To people claiming a physical raid is pointless from the standpoint of gathering data:
- you are thinking of a company doing things the right way. You are thinking of a company abiding by the law, storing data on its own servers, having good practices, etc.
The moment a company starts doing dubious stuff, good practices go out the window. People write emails with cryptic analogies, people start deleting emails... Then, as the circumventions become more numerous and complex, there still needs to be a trail for the activity to remain understandable. That trail will exist in written form somewhere, and it must be hidden. It might be paper, it might be shadow IT, but the point is that unless you are merely forgetting to keep track of coffee pods at the social corner, you will leave traces.
So yes, raids do make sense BECAUSE it's about recurring, complex activities that are just too hard to keep in the mind of a single individual over long periods of time.
All because "AI nudes"? Seems heavy-handed, almost like the controversy over naughty images has received a state-sponsored outrage boost for other reasons.
"Shocking Grok images"... really? It's AI. We know AI can make any image. The images are nothing but fake digital paintings that lose all integrity as quickly as they're generated.
Beyond comedic kicks for teenage boys, they're inconsequential to everyone else. But never mind that, hand me a pitchfork and a pre-fabricated sign and point me to the nearest anti-Grok protest.
It has always been illegal and morally reprehensible to create, own, distribute or store sexually explicit material that represents a real person without their consent, regardless of whether they are underage or not.
Grok is a platform that is enabling this en masse. If xAI can't bring in guardrails or limit who can access these capabilities, then they deserve what's coming to them.
At my kids' school, the children have been using Grok to create pics of other children without clothes on. ChatGPT etc. won't let you do that. Grok needs some controls, and X seems unable to add them itself.
In such a case specifically: uncover internal communications showing the company was aware of the problem and ignored it, which presumably affects liability a lot.
This is the cybercrime unit. They will exfiltrate any data they want. They will use employee accounts to pivot into the rest of the X network. They don't just go in and grab a couple of papers, laptops and phones. They hook into the network and begin cracking.
I really find this kind of appeal quite odious. God forbid that we expect fathers to have empathy for their sons, sisters, brothers, spouses, mothers, fathers, uncles, aunts, etc., or dare we hope that they might have empathy for friends or even strangers? It's like an appeal to hypocrisy or something. Sure, I know such people exist, but it feels like throwing so many people under the bus just to (probably unsuccessfully) try to convince someone of something by appealing to fathers' emotional overprotectiveness toward their daughters.
You should want to protect all of the people in your life from such a thing or nobody.
It's not hypothetical. And in fact the girl who was being targeted was expelled, not the boys who did it [1].
Those boys absolutely should be held accountable. But I also don't think that Grok should be able to quickly and easily generate fake revenge porn for minors.
No, because the comment is in bad faith. It introduced an unrelated issue (poor sentencing from authorities) as an argument about the initial issue we are discussing (AI nudes), derailing the conversation, and then used the new issue it had itself introduced to legitimize a poor argument, when one has nothing to do with the other and each can be good or bad independently of the other.
I don't accept this as good-faith argumentation, and nor do the HN rules.
You are the only one commenting in bad faith, by refusing to understand/acknowledge that the people using Grok to create such pictures AND Grok itself are both part of the issue. It should not be possible to create nudes of minors via Grok. Full stop.
The existence and creation of cigarettes and adult nude magazines are fully legal; only their sale to kids is illegal. If kids try to illegally obtain those LEGAL items, it doesn't make the existence of those items illegal, just the act of sale to them.
Meanwhile, the existence/creation of CSAM of actual people isn't legal for anyone, no matter the age.
> A country can ban guns and allow rope, even though both can kill.
That's actually a good argument. And that's how the UK ended up banning not just guns but all sorts of swords, machetes and knives, while the violent crime rates have not dropped.
So maybe dangerous knives are not the problem, but the people using them to kill other people. So then where do we draw the line between lethal weapons and crime correlation? At which instruments? Same with software tools, which keep getting more powerful over time, lowering the bar to entry for generating nudes of people. Where do we draw the line on which tools are responsible for that?
"Correction: kids made the pictures. Using Grok as the tool."
No. That is not how AI works nowadays. Kids told the tool what they wanted, and the tool understood and could have refused, like all the other models, but instead it delivered. And it could only do so because it was specifically trained for that.
"If kids were to "git gud" at photoshop "
And what is that supposed to mean?
Adobe makes general purpose tools as far as I know.
You're beating around the bush, not answering the main question. Anyone skilled at Photoshop can do fake nudes as good as or even better than AI, including kids (we used it to make fun fakes of teachers in embarrassing scenarios back in the mid-00s). So why is only the AI tool to blame for what its users do, but not Photoshop, if both tools can be used to do the same thing?
Punishing kids after the fact does not stop the damage from occurring. Nothing can stop the damage that has already occurred, but if you stop the source of the nudes, you can stop future damage from occurring to even more girls.
>Punishing kids after the fact does not stop the damage from occurring.
Banning AI doesn't stop the damage from occurring. Bullies at school/college have been harassing their victims, often to the point of suicide, for decades/centuries before AI.
I'm sorry, did the article or anyone in this subthread suggest banning AI? That seems like quite a non sequitur. I'm pretty sure the idea is to put a content filter on an online platform for one very specific kind of already-illegal content (modified nude images of real people, especially children), which is a far cry from a ban. Nothing can stop local diffusion or Photoshop, of course, but the hardware and technical barriers are so much higher that curtailing Grok would probably cut off 99% or more of the problem material. I suppose you'll tell me that if any solution is not 100% effective we should do nothing and embrace anarchy?
Edit for the addition of the line about bullying: "Bullying has always happened, therefore we should allow new forms of even worse bullying to flourish freely, even though I readily acknowledge that it can lead to victims committing suicide" is a bizarre and self-contradictory take. I don't know what point you think you're making.
Child sexual abuse material is literally in the training sets. Saying "banning AI" as though it's all the same thing, and all morally-neutral, is disingenuous. (Yes, a system with both nudity and children in its dataset might still be able to produce such images – and there are important discussions to be had about that – but giving xAI the benefit of equivocation here is an act of malice.)
They may well get in trouble, but that takes time; in the meantime the photos will have been seen by most kids in the school, plus you might get a year of bullying.
Your education might be so disrupted that you have to change schools.
You're defending X/Grok as if it's a public social platform.
It is a privately controlled, public-facing group chat. Being a chat medium does not grant you the same rights as being a person. France isn't America.
If a company operates to the detriment and against the values of a nation, e.g. not paying their taxes or littering in the environment, the nation will ask them to change their behavior.
If there is a conspiracy of contempt, at some point things escalate.
I'm in the same boat. We have literal pedos and child abusers in the Epstein files talking openly about doing despicable things to women, kids and even babies, while the authorities are focused on criminalizing the generation of images of fake minors who don't exist and which any other LLM platform can produce if asked.
Plus, how do you even judge the age of AI-generated fake people to say it's CP? Reminds me of when UK activists were claiming Grok's anime-girl avatar was a minor and deserved to be considered CP, when she had massive tits that no kid has. So how much of this is just a political witch-hunt looking for any reason to justify itself?
You want the French authorities to focus on the Epstein files to the exclusion of all other corporate misbehaviour?
Also, it seems pretty likely that Musk is tangled up with the Epstein shit. First Musk claimed he turned down an offer to go to the island. Now it turns out Musk repeatedly sought to visit, including wanting to know when the "wildest" party was happening, after Epstein was already known as a child sex abuser. Musk claimed that Epstein had never been given a tour of SpaceX, but it turns out he was, in 2013. It's the classic narcissistic "lie for as long as possible" behaviour. Will be interesting to see what happens as more is revealed.
>You want the French authorities to focus on the Epstein files to the exclusion of all other corporate misbehaviour?
No, I said no such thing. What I said was that the resources of the authorities are a finite pie. If most of them go toward petty stuff like corporate misbehavior that hurts nobody, there won't be enough for grave crimes like actual child abuse that hurts real people.
Same as how police won't bother with your stolen phone/bike because they have bigger crimes to catch. I'm asking for the same logic to be applied here.
The same guy responsible for creating child porn that you are defending is also on Epstein's list. Also, don't abbreviate "child pornography"; it shows you have a side in this.
That's like the 1993 moral panic that video games like Doom cause mass shootings, or the 1980s moral panic that metal music causes Satanism, or the 1950s moral panic that superhero comic book violence leads to juvenile delinquency.
Politicians are constantly looking for an external made up enemy to divert attention to from the real problems.
People like Epstein, and the mass exploitation of women and children, have existed for thousands of years in the past, and will exist for thousands of years in the future. It's part of the nature of the rich and powerful to execute on their deranged fetishes; it's been documented in writing since at least the Roman and Ottoman empires.
Hell, I can guarantee you there are other Epsteins operating in the wild right now that we haven't heard of (yet); it's not like he was in any way unique. I can also guarantee you that 1 in 5-10 normal-looking people you meet daily on the street have similar deranged desires to the guests on Epstein's island but can't execute on them because they're not rich and influential enough to get away with it, though they'd do it if they could.
> That's like the 1993 moral panic that video games like Doom cause mass shootings,
Apart from the fact that Doom wasn't producing illegal content.
The point is that Grok is generating illegal content for those jurisdictions. In France you can't generate CSAM; in the UK you can't distribute CSAM. Those are actual laws with legal tests, and none of them require the depictions to be of actual people; they just need to depict _children_ to be illegal.
Moral panics require new laws to enforce, generally. This is just enforcing already existing laws.
Moreover, had it been any other site, it would have been totally shut down by now and the servers impounded. It's only because Musk is close to Trump and rich that he's escaped the fate that you or I would have suffered if we'd done the same.
>Apart from the fact that Doom wasn't producing illegal content.
Sure, but where's the proof that Grok is actually producing illegal content? I searched for news sources, but they're all just parroting empty accusations, not concrete, documented cases.
Another line of reasoning is that with more fake CP it is more difficult to investigate the real CP, hunt down the perpetrators and consequently save children.
Oh yeah, because the main reason why Epstein and his guests got away with it for so long is that there was so much low-hanging CP out there confusing authorities and prosecutors, not the corruption, cronyism and political protection they enjoyed at the highest levels of government.
But how about the "1 in 5-10 normal-looking people you meet daily on the street have similar deranged desires to the guests on Epstein's island but can't execute on them because they're not rich and influential enough to get away with it, though they'd do it if they could."
> Another line of reasoning is that with more fake CP it is more difficult to investigate the real CP, hunt down the perpetrators and consequently save children.
(own quote)
Yes, the predators existed before AI, but also:
> I think the reasoning is that the AI contributes to more offenders (edited).
(own quote, edited)
To be clear, I don't think this line of reasoning is entirely convincing, but apparently some people do.
I remember when CSAM meant actual children not computer graphics.
Should platforms allow violent AI images? How about "R-rated" violence like we see in popular movies? Point-blank executions, brutal and bloody conflict involving depictions of innocent deaths, torment and suffering... all good? Hollywood says all good; how about you? How far do you take your "unacceptable content" guidance?
There are multiple valid reasons to fight realistic computer-generated CSAM.
Uncontrolled proliferation of AI CSAM makes detection of "genuine" material much harder and prosecution of perpetrators more difficult, and, specifically in many of the Grok cases, it harms young victims who were used as templates for the material.
Content is unacceptable if its proliferation causes sufficient harm, and this is arguably the case here.
> I remember when CSAM meant actual children not computer graphics.
The "oh its photoshop" defence was an early one, which required the law to change in the uk to be "depictions" of children, so that people who talk about ebephiles don't have an out for creating/distributing illegal content.
If you find yourself bringing out your dick when "brutal and bloody conflict" comes up on your screen, perhaps you should talk to a psych professional.
Frankly, it sounds to me like a "show me the man and I'll show you the crime" kind of operation. France and the UK, and judging by yesterday's speech by the PM of Spain maybe the whole EU, might be looking to do what China and Russia did earlier on and start cracking down on foreign social media by making it impossible to operate without total alignment with their vision, not just their (new) rules. Together with a push for local alternatives, which currently don't seem to exist, it may spell the end for a big chunk of the global social network landscape.
I still believe that the EU and aligned countries would rather have America agree to much tighter speech controls, digital ID, and ToS-based speech codes, which US Democrats apparently partly or totally support. But if they have workable alternatives they will deal with them from a different position.
Yes, if you don't follow EU laws, prepare not to do business in Europe. Likewise, if you don't follow US laws, I'd advise against trying to do business in the USA.
That was decades later, but yeah, I don't think for a second that was justifiable - not even considering that China had completely closed shop to America decades earlier and this was a one-way openness relationship for a long time; they could have sold this as a reciprocity issue, but they didn't.
Especially when America already controls the main outlets through the Google Play Store and the Apple App Store, and yep, they have proven to control them, not just happen to host them as a country.
arguably America did have valid security concerns with Huawei though, but if those are the rules then you cannot complain later on
It's worth pointing out that in France and the UK, the authorities involved operate at arm's length, independent of the political bodies - it's not like the US, where if you give the President good vibes you can become head of the FBI, and all you have to do in return is whatever he says. There are statutory instruments (in France, constitutional clauses) that determine the independence of these authorities.
They are tasked - and held to account by respective legislative bodies - with implementing the law as written.
Nobody wrote a law saying "Go after Grok". There is, however, a law in most countries about the creation and dissemination of CSAM and non-consensual pornography. Some of that law is relatively new (the UK only introduced some of these laws in recent years), but they all predate the current wave of AI investment.
Founders, boards of directors and their internal and external advisors could:
1. Read the law and make sure any tools they build comply
2. When told their tools don't comply take immediate and decisive action to change the tools
3. Work with law enforcement to apply the law as written
Those companies, if they find this too burdensome, have the choice of not operating in that market. By operating in that market, they both implicitly agree to the law, and are required to explicitly abide by it.
They can't then complain that the law is unfair (it's not), that it's being politicised (How? By whom? Show your working), or that this is all impossible in their home market, where they are literally offering presents for the personal enrichment of the President on bended knee while he demands that the ownership structures of foreign social media companies like TikTok be changed to meet the agenda of himself and his administration.
So, would the EU like tighter speech controls? Yes, they'd like implementation of the controls on free speech enshrined in legislation created by democratically appointed representatives. The alternative - algorithms that create abusive content, of women and children in particular - is not wanted by the people of the UK, the EU, or most of the rest of the world; laws are written to that effect and are then enforced by the authorities tasked with that enforcement.
This isn't "anti-democratic"; it's literally democracy in action standing up to technocratic feudalism, an Ayn Randian wet dream being played out by some morons who got lucky.
European courts have repeatedly said that in France the procureur (public prosecutor) isn’t an “independent judicial authority”.
The European Court of Human Rights has reiterated this point (e.g. 29 Mar 2010, appl. no. 3394/03), and the Court of Justice of the European Union reaches a very similar conclusion (2 Mar 2021, C-746/18): prosecutors are part of the executive hierarchy and can't be treated as the neutral, independent judicial check some procedures require.
For a local observer, this is made obvious by the fact that the procureur in France always follows the current political vibes, usually with just a few months' delay (extremely fast, when you consider how slowly justice works in the country).
> It's worth pointing out that in France and the UK, the authorities involved operate at arm's length, independent of the political bodies
As someone who has lived in (and followed current affairs in) both of these countries, I find this a very idealistic and naïve view. There can be a big gap between theory and practice.
> There are statutory instruments (in France, constitutional clauses) that determine the independence of these authorities.
> They are tasked - and held to account by respective legislative bodies -
It's worth noting here that the UK doesn't have separation of powers or a supreme court (in the US sense).
I'm not saying I'm entirely against this, but just out of curiosity, what do they hope to find in a raid of the French offices, a folder labeled "Grok's CSAM Plan"?
It was known that Grok was generating these images long before any action was taken. I imagine they'll be looking for internal communications about what the company was doing, or deciding not to do, during that time.
There was a WaPo article yesterday that talked about how xAI deliberately loosened Grok's safety guardrails and relaxed restrictions on sexual content in an effort to make the chatbot more engaging and "sticky" for users. xAI employees had to sign new waivers in the summer, and start working with harmful content, in order to train and enable those features.
I assume the raid is hoping to find communications that establish that timeline, and maybe internal concerns that were ignored? Also internal metrics that might show they were aware of the problem. External analysts said Grok was generating a CSAM image every minute!
What do they hope to find, specifically? Who knows, but maybe the prosecutors have a better awareness of specifics than us HN commenters who have not been involved in the investigation.
What may they find, hypothetically? Who knows, but maybe an internal email saying, for instance, 'Management says keep the nude photo functionality, just hide it behind a feature flag', or maybe 'Great idea to keep a backup of the images, but must cover our tracks', or perhaps 'Elon says no action on Grok nude images, we are officially unaware anything is happening.'
Or “regulators don't understand the technology; short of turning it off entirely, there's nothing we can do to prevent it, and the costs involved in attempting to reduce it are much greater than the likely fine, especially given that we're likely to receive such a fine anyway.”
You appear to have lost the thread (or maybe you're replying to things directly from the newcomments feed? If so, please stop.) We're talking about what sort of incriminating written statements the raid might hope to discover.
> out of curiosity, what do they hope to find in a raid of the French offices, a folder labeled "Grok's CSAM Plan"?
You're not too far off.
There was a good article in the Washington Post yesterday about many, many people inside the company raising alarms about the content and its legal risk, but they were blown off by managers chasing engagement metrics. They even made up a whole new metric.
There were also prompts telling the AI to act angry or sexy or other things just to keep users addicted.
Even if some kid makes a video of themselves jerking off for their own personal enjoyment, unprompted by anyone else, if someone else gains access to it (e.g. a technician at a store or an unprincipled guardian) and makes a copy for themselves, they're criminally exploiting the kid by doing so.
Not really, otherwise perpetrators would just say, "I was just looking at it, I didn't do anything as bad as creating it." Their act is still illegal.
There was a cartoon picture I remember seeing around 15+ years ago of Bart Simpson performing a sex act. In some jurisdictions (such as Australia), this falls under the legal definition.
Huge difference here in Europe. CSAM is a much more serious crime. That's why e.g. Interpol runs a global database of CSAM but doesn't bother for mere child porn.
You're wrong - at least from the perspective of the commons.
First paragraph on Wikipedia
> Child pornography (CP), also known as child sexual abuse material (CSAM) and by more informal terms such as kiddie porn,[1][2][3] is erotic material that involves or depicts persons under the designated age of majority. The precise characteristics of what constitutes child pornography vary by criminal jurisdiction.[4][5]
Honestly, reading your link got me seriously facepalming. The whole argument seems to be centered on the fact that sexualizing children is disgusting, hence it shouldn't be called porn.
While I'd agree that sexualizing kids is disgusting, denying that it's porn on those grounds feels kinda... childish? Like someone holding their ears shut and shouting loudly in order not to hear the words the adults around them are saying.
"...the encyclopedia anyone can edit." Yes, there are people who wish to redefine CSAM to include child porn - including even that between consenting children committing no crime and no abuse.
> have no idea how anyone could reasonably draw that conclusion from this thread.
> > Honestly, reading your link got me seriously facepalming. The whole argument seems to be centered on the fact that sexualizing children is disgusting, hence it shouldn't be called porn.
Where exactly did you get the impression that I made this observation from this comment thread?
Your Interpol link seems to be literally using the same argument again, from a very casual glance, btw.
> We encourage the use of appropriate terminology to avoid trivializing the sexual abuse and exploitation of children.
> Pornography is a term used for adults engaging in consensual sexual acts distributed (mostly) legally to the general public for their sexual pleasure.
CSAM is the woke word for child pornography, which is the normal word for pornography involving children. Pornography is defined as material aiming to sexually stimulate, and CSAM is that.
I'd say it was the other way around, MAP is an attempt at avoiding the stigma of pedophile, while CSAM is saying "pornography can be an entirely acceptable, positive, consensual thing, but that's not what 'pornography' involving children is, it's evidence of abuse or exploitation or..."
> and no crime was prevented by harassing local workers.
Seizing records is usually a major step in an investigation. It's how you get evidence.
Sure, it could just be harassment, but this is also what normal police work looks like. France has a reasonable judicial system, so absent other evidence I'm inclined to believe this was legit.
Well, there is evidence that this company made and distributed CSAM and pornographic deepfakes to make a profit. There is no lack of evidence there for the investigators.
So the question becomes if it was done knowingly or recklessly, hence a police raid for evidence.
See also [0] for a legal discussion in the German context.
> Well, there is evidence that this company made and distributed CSAM
I think one big issue with this statement – "CSAM" lacks a precise legal definition; the precise legal term(s) vary from country to country, with differing definitions. While sexual imagery of real minors is highly illegal everywhere, there's a whole lot of other material – textual stories, drawings, animation, AI-generated images of nonexistent minors – which can be extremely criminal on one side of an international border, de facto legal on the other.
And I'm not actually sure what the legal definition is in France; the relevant article of the French Penal Code 227-23 [0] seems superficially similar to the legal definition of "child pornography" in the United States (post-Ashcroft vs Free Speech Coalition), and so some–but (maybe) not all–of the "CSAM" Grok is accused of generating wouldn't actually fall under it. (But of course, I don't know how French courts interpret it, so maybe what it means in practice is something broader than my reading of the text suggests.)
And I think this is part of the issue – xAI's executives are likely focused on compliance with US law on these topics, less concerned with complying with non-US law, in spite of the fact that CSAM laws in much of the rest of the world are much broader than in the US. That's less of an issue for Anthropic/Google/OpenAI, since their executives don't have the same "anything that's legal" attitude which xAI often has. And, as I said – while that's undoubtedly true in general, I'm unsure to what extent it is actually true for France in particular.
Not really, they put a shit ton of effort in to make sure you can't create any kind of nude/suggestive pictures of anyone. I imagine they have strict controls on making images of children, but I don't feel inclined to find out.
It is very different. It is YOUR 3D printer; no one else is involved. If you print a knife and kill somebody with it, you go to jail; no third party is involved.
If you use a service like Grok, then you use somebody else's computer. X is the owner of the computer that produced the CP. So of course X is at least partly liable for producing it.
The safe harbor provisions largely protect X from the content that users post (within reason). But here Grok/X were actually producing the objectionable content. Users were making gross requests, and then an LLM owned by X, using X servers and X code, would generate the illegal material and post it to the website. The entity responsible is no longer the user but the company itself.
I'm not trying to make excuses for Grok, but how exactly isn't the user creating the content? Grok doesn't create images of its own volition; the user is still required to give it some input, therefore "creating" the content.
X is making it pretty clear that it is "Grok" posting those images and not the user. It is a separate posting that comes from an official account named "Grok". X has full control over what the official "Grok" account posts.
There is no functionality for the users to review and approve "Grok" responses to their tweets.
This might be an unpopular opinion, but I always thought we might be better off without the Web 2.0 model where site owners aren't held responsible for user content.
If you're hosting content, why shouldn't you be responsible? Because your business model is impossible if you're held to account for what's happening on your premises?
Without safe harbor, people might have to jump through the hoops of buying their own domain name and hosting content themselves. Would that be so bad?
What about webmail, IM, or any other sort of web-hosted communication? Do you honestly think it would be better if Google were responsible for whatever content gets sent to a gmail address?
I don't have an answer, but the theme that's been bouncing around in my head has been about accessibility.
Grok makes it trivial to create fake CSAM or other explicit images. Before, if someone spent a week in Photoshop to do the same, it wouldn't be Adobe that got the blame.
Same for 3D printers. Before, anyone could make a gun provided they had the right tools (which are very expensive); now it's being argued that 3D printers are making this more accessible. Although I would argue it's always been easy to make a gun; all you need is a piece of pipe. So I don't entirely buy the moral panic against 3D printers.
Where that threshold lies I don't know. But I think that's the crux of it. Technology is making previously difficult things easier, to the benefit of all humanity. It's just unfortunate that some less-nice things have also been included.
Internet routers, network cards, computers, operating systems and various application software have no guardrails and are used for all sorts of nefarious things. Why aren't those companies raided?
Maybe.
We do have a definition of "machine gun" codified in law which clearly separates it from a block of lead. What codified legal definitions are used here to separate Photoshop from Grok in the context of those deepfakes and CSAM?
Without such clear legal definitions, going after Grok while not going after Photoshop is just an act of political pressure.
They don’t provide a large platform for political speech.
This isn’t about AI or CSAM (Have we seen any other AI companies raided by governments for enabling creation of deepfakes, dangerous misinformation, illegal images, or for flagrant industrial-scale copyright infringement?)
> The company made and released a tool with seemingly no guard-rails, which was used en masse to generate deepfakes and child pornography.
Do you have any evidence for that? As far as I can tell, this is false. The only thing I saw was Grok changing photos of adults into them wearing bikinis, which is far less bad.
That's why this is an investigation looking for evidence and not a conviction.
This is how it works, at least in civil law countries. If the prosecutor has reasonable suspicion that a crime is taking place, they send the so-called "judiciary police" to gather evidence. If they find none (or the findings are inconclusive, etc.) the charges are dropped; otherwise they ask the court to go to trial.
On some occasions I take on judiciary police duties for animal welfare. Just last week I participated in a raid. We were not there to arrest anyone, just to gather evidence so the prosecutor could decide whether to press charges and go to trial.
For obvious reasons, decent people are not about to go out and try to generate child sexual abuse material to prove a point to you, if that's what you're asking for.
First of all, the Guardian is known to be heavily biased against Musk. They always try hard to make everything about him sound as negative as possible. Second, last time I tried, Grok even refused to create pictures of naked adults. I just tried again and this is still the case:
The claim that they released a tool with "seemingly no guardrails" is therefore clearly false. I think what instead happened here is that some people found a hack to circumvent some of those guardrails via something like a jailbreak.
> Also, X seem to disagree with you and admit that CSAM was being generated
That post doesn't contain such an admission, it instead talks about forbidden prompting.
> Also the reason you can’t make it generate those images is because they implemented safeguards since that article was written:
That article links to this article: https://x.com/Safety/status/2011573102485127562 - which contradicts your claim that there were no guardrails before. And as I said, I already tried it a while ago, and Grok also refused to create images of naked adults then.
> This looks like plain political pressure. No lives were saved, and no crime was prevented by harassing local workers.
I wouldn't even consider this a reason if it weren't for the fact that OpenAI and Google, and hell, literally every image model out there, all have the same "this guy edited this underage girl's face into a bikini pic" problem (this was the most public example I've heard of, so I'm going with that as my example). People still jailbreak ChatGPT, and how much money have they poured into that?
French prosecutors use police raids way more than those in other Western countries. Banks, political parties, ex-presidents, corporate HQs, worksites... Here, while white-collar crimes are punished about as much as in the US (i.e. very little), we do at least investigate them.
Aren't a lot of US pickup trucks basically that? Sure, maybe there's a mechanism preventing you from installing a rear-facing baby seat in front of an airbag, but they're also built so that you can't see anything adult-human-sized 15m in front of the car, let alone anything child-sized.
Comparing apples and oranges. Defending this company is becoming cringe and ridiculous. X effed up, and Musk did it on purpose. He uses CSAM to strong-arm the boundaries of the law. That's not worth defending unless you also say eff the rule of law.
The US would spend 20 years arguing about which agency's jurisdiction it was, and ignore the dead babies?
No, wait, Volvo is European. They'd impose a 300% tariff and direct anyone who wanted a baby-killing model car to buy one from US manufacturers instead.
Let's raid car companies too. We were all born into this. We never had a vote. Thomas Jefferson is said to have written that constitutions ought to be rewritten every so often, or the dead rule by fiat decree. Let's.
The rich can join in the austerity too. No one voted for them. We've been conditioned to pick acquiescence or poverty. We were abused into kowtowing to a bunch of pants-shitting, dementia-addled olds educated in religious crackpottery. Their economic and political memes are just that, memes, not immutable physical truth.
In America, as evidenced by the public not in the streets protesting for single payer comprehensive healthcare, we clearly don't want to be on the hook for each other's lives. That's all platitudes and toxic positivity.
Hopes and prayers, bloodletting was good enough for the Founders!
This vindicates the pro-AI censorship crowd I guess.
It definitely makes it clear what is expected of AI companies. Your users aren't responsible for what they use your model for; you are. So you'd better make sure your model can't ever be used for anything nefarious. If you can't do that without keeping the model closed and verifying everyone's identities... well, that's good for your profits, I guess.
This is not about AI but about censorship of a nonaligned social network. It's been a developing current in the EU. They have basically been foaming at the mouth at the platform since it got bought.
It's not really different from how we treat any other platform that can host CSAM. I guess the main difference is that it's being "made" rather than simply "distributed" here.
Those images are generated from a training set, and it is already well known and reported that those training sets contain _real_ CSAM, real violence, real abuse. That "generated" face of a child is based on real images of real children.
Indeed, a Stanford study from a few years back showed that the image data sets used by essentially everybody contain CSAM.
Everybody else has teams building guardrails to mitigate this fundamental existential horror of these models. Musk fired all the safety people and decided to go all in on “adult” content.
> let's distinguish between generated images, however revolting, and actual child sexual abuse.
Can't, because even before GenAI the "oh, it's generated in Photoshop" or "they just look young" excuses were used successfully to allow a lot of people to walk free. The law was tightened in the early 2000s for precisely this reason.
"Enough" can always be pushed into the impossible. That's why laws and regulations need to be more concrete than that.
There's essentially a push to end the remnants of the free speech Internet by making the medium responsible for the speech of its participants. Let's not pretend otherwise.
In the UK, you must take "reasonable" steps to remove illegal content.
This normally means some basic detection (i.e. fingerprinting against a collaborative database, which is widely used) or, if a user is consistently uploading such material, banning them.
Allowing a service that you run to continue generating such illegal content, even after you publicly admit that you know it's wrong, is not reasonable.
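For concreteness, the fingerprinting mentioned above is usually perceptual hashing matched against a shared industry hash list (PhotoDNA-style). A minimal sketch, assuming the third-party Python `imagehash` library; the listed hash value and the distance threshold are made up for illustration, not any platform's real pipeline:

    # Sketch: flag images that are near-duplicates of entries in a
    # known-bad perceptual-hash list (the hash value below is invented).
    from PIL import Image
    import imagehash

    KNOWN_BAD = {imagehash.hex_to_hash("f0e4d2c1b0a09876")}  # hypothetical DB dump
    MAX_DISTANCE = 8  # Hamming-distance cutoff for a "near duplicate"

    def is_known_bad(path: str) -> bool:
        h = imagehash.phash(Image.open(path))  # 64-bit perceptual hash
        return any(h - bad <= MAX_DISTANCE for bad in KNOWN_BAD)

Note the limitation this implies: hash lists catch redistribution of already-known material, but freshly generated images match nothing in the list, which is why generation-time filtering is the part being argued about here.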
if you can be sued for billions because some overbearing body, with a very different ideology to yours, can deem your moderation/censorship rules "unreasonable", then what you do is err on the side of caution and allow nearly nothing
this is not compatible with that line of business - perhaps one of the reasons nothing is done in Europe these days
Yes, they could have an uncensored model, but then they would need proper moderation, deleting this kind of content instantly or banning users who produce it. Or not allowing it in the first place.
It doesn't matter how CSAM is produced; the only thing that matters is that it is on the platform.
I could maybe see this argument if we were talking about raiding Stable Diffusion or Facebook or some other provider of local models. But the content at issue was generated not just by Twitter's AI model, but on their servers, integrated directly into their UI and hosted publicly on their platform. That makes them much more clearly culpable -- they're not just enabling this shit, they're creating it themselves on demand (and posting it directly to victims' public profiles).
And importantly, this is clearly published by Grok rather than the user. Obviously this isn't the US, but if it were, I'm not sure Section 230 would apply.
It's a bit of a leap to say that the model must be censored. SD and all the open image gen models are capable of all kinds of things, but nobody has gone after the open model trainers. They have gone after the companies making profits from providing services.
Holding corporations accountable for their profit streams is "censorship?" I wish they'd stop passing models trained on internet conversations and hoarded data as fit for any purpose. The world does not need to boil oceans for hallucinating chat bots at this particular point in history.
Not knowing any better, and not having seen any of the alleged images, my default guess would be that they used the exact same CSAM-filtering pipeline already in place on X, regardless of the origin of the submitted images.
They obviously didn't really implement anything, as you can still find that content, or involuntary nudes of other people (which is also an invasion of privacy), super easily.
If the camera reliably inserted racist filters and the ballpoint pen added hurtful words to whatever you wrote, then indeed, let their makers up their legal insurance.
Especially if contracts with SpaceX start being torn up because the various ongoing investigations and prosecutions of xAI are now ongoing investigations and prosecutions of SpaceX. And next new lawsuits for creating this conflict of interest by merger.
Incredible to see all these commenters defending obviously nasty behaviour by a bad individual and a sad company. Do you admire Elon so much because he has money and success? There are more important things in life. Not being an asshole Nazi admirer, for example.
Honest question: What does it mean to "raid" the offices of a tech company? It's not like they have file cabinets with paper records. Are they just seizing employee workstations?
Seems like you'd want to subpoena source code or gmail history or something like that. Not much interesting in an office these days.
Sadly the media calls the lawful use of a warrant a 'raid' but that's another issue.
The warrant will have detailed what it is they are looking for, French warrants (and legal system!) are quite a bit different than the US but in broad terms operate similarly. It suggests that an enforcement agency believes that there is evidence of a crime at the offices.
As a former IT/operations guy I'd guess they want on-prem servers with things like email and shared storage, stuff that would hold internal discussions about the thing they were interested in, but that is just my guess based on the article saying this is related to the earlier complaint that Grok was generating CSAM on demand.
It is a raid in that it's not expected; it relies on not being expected, and they come and take away your stuff by force. Maybe it's a legal raid, but let's not sugar-coat it: it's still a raid, and whether you're guilty or not it will cause you a lot of problems.
Agreed, it's a stretch. My experience comes from Google, when I worked there and they set up a Chinese office: they were very carefully trying to avoid having anything on premises that could be searched/exploited. It was a huge effort, one that wasn't made for the European and UK offices, where the government was not an APT. So did X have that level of hygiene in France? Were there IT guys in the same vein as the folks Elon recruited into DOGE? Was everyone in the office "loyal"? [1] I doubt X was paranoid "enough" in France not to have some leakage.
[1] This was also something Google did which was change access rights for people in the China office that were not 'vetted' (for some definition of vetted) feeling like they could be an exfiltration risk. Imagine a DGSE agent under cover as an X employee who carefully puts a bunch of stuff on a server in the office (doesn't trigger IT controls) and then lets the prosecutors know its ready and they serve the warrant.
Under GDPR if a company processes European user data they're obligated to make a "Record of Processing Activities" available on demand (umbrella term for a whole bunch of user-data / identity related stuff). They don't necessarily need to store them onsite but they need to be able to produce them. Saying you're an internet company doesn't mean you can just put the stuff on a server in the Caribbean and shrug when the regulators come knocking on your door
That's aside from the fact that they're a publicly traded company under obligation to keep a gazillion records anyway like in any other jurisdiction.
Gather evidence against employees, use that evidence to put them under pressure to testify against their employer or grant access to evidence.
Sabu was put under pressure by the FBI; they threatened to place his kids into foster care.
That was legal. Guess what: similar things would be legal in France.
We all forget that money is nice, but nation states have real power. Western liberal democracies just rarely use it.
The same way the president of the USA can order a drone strike on a Taliban warlord, the president of France could order Musk's plane to be escorted to Paris by three fighter jets.
> We all forget that money is nice, but nation states have real power.
I remember something (probably linked from here), where the essayist was comparing Jack Ma, one of the richest men on earth, and Xi Jinping, a much lower-paid individual.
They indicated that Xi had Ma in a chokehold. I think he "disappeared" Ma for some time. I don't remember exactly how long, but it may have been over a year.
From what I hear, Ma made one speech critical of the government and Xi showed him his place. It was a few years: a year of total disappearance followed by slow rehab.
But China is different. Not sure most of western europe will go that far in most cases.
Ah, so the daily LARGE protests in Venezuela against his kidnapping are not indicative of "the vast majority of Venezuela".
But the celebratory pics, which were claimed to be from Venezuela but were actually from Miami and elsewhere (including, I kid you not, an attempt to pass off Argentines celebrating a Copa América win)... those are indicative of "the vast majority of Venezuela"?
If I were smarter, I might start to wonder why, if President Maduro was so unpopular, his abductors had to resort to fake footage, which was systematically outed & destroyed by independent journalists within 24 hours. I mean, surely enough real footage should exist.
Probably better not to have inconvenient non-US-approved independent thoughts like that.
I never liked the Pauls and their opinions, but I must say that they usually speak according to their principles, rather than making up principles to fit what they want to happen.
To me, that's the distinction between political opponents I can respect, and, well, whatever we're seeing now.
The people of the US mostly wouldn't like it; the people of VZ mostly did, and consider Maduro a thug who lost and stayed in power, not their president. Ideologues like Paul have trouble with exceptions to their worldview.
I mean, come on, we kidnapped him. Yes, he was arrested, but we went into another sovereign nation with special forces and yoinked their head of state back to Brooklyn.
To be fair, he isn't the legitimate head of state: he lost an election and is officially recognized as a usurper, and the US had the support of those who actually won.
Large numbers of people call Joe Biden's election illegitimate. You could even say that's the official position of the current government. Would his kidnapping by a foreign nation be okay with you too?
In France it's possible, without legal consequences (though immoral), to call 119 and push to have a baby taken from a family for no reason except that you do not like someone.
Claim that you suspect there may be abuse; it will trigger a case for a "worrying situation".
Then it's a procedural lottery:
-> If you get lucky, they will investigate, meet the people, and dismiss the case.
-> If you get unlucky, they will take the baby, and it's only then, after a long investigation and a "family assistant" (who will check on you every day), that you can recover your baby.
Typically it's an ex-wife who doesn't like the ex-husband, but it can be a neighbor, etc.
One worker explains that they don't really have time to investigate when processing reports: https://www.youtube.com/watch?v=VG9y_-4kGQA
They have to act very fast, and by default it is considered safer to remove the child from the family.
The boss of the agency doesn't even take the time to answer the journalists there...
This is very common, all "think of the children" laws are ripe for abuse. I'm convinced the secrecy around child abuse/child protective services is regularly abused both by abusive parents and abusive officials.
If you call 119 it gets assessed and potentially forwarded to the right department, which then assesses it again and might (quite likely will) trigger an inspection. The people who turn up have broad powers to seize children from the home in order to protect them from abuse.
In general this works fine. Unfortunately in some circumstances this does give a very low skilled/paid person (the inspector) a lot of power, and a lot of sway with judges. If this person is bad at their job for whatever reason (incompetence/malice) it can cause a lot of problems. It is very hard to prove a person like this wrong when they are covering their arse after making a mistake.
AFAIK similar systems are present in most Western countries, and many of them - like France - are struggling with funding and are likely cutting in the wrong place (audit/rigour) to meet external KPIs. One of the worst ways this manifests is in creating "quick scoring" methods, which can end up with misunderstandings (e.g. someone said a thing they didn't mean) ranking very highly, but subtle evidence of abuse ranking moderate to low.
So while this is a concern, this is not unique to France, this is relatively normal, and the poster is massively exaggerating the simplicity.
In Sweden there is an additional review board that goes through the decisions made by the inspector. The idea is to limit the power that a single inspector has. In practice, however, the review board tends to rubber-stamp decisions, so incompetence/malice still happens.
There was a huge mess right after MeToo, when an inspector went against the court's rulings. The court had given the father sole custody in an extremely messy divorce, and the inspector did not agree with the decision. As a result they removed the child from his father, in direct contravention of the court's decision, and put the child through 6 years of isolation and abuse with no access to school. It took investigative journalists a while, but the result of the case being highlighted in the media was that the inspector and supervisor are now fired, with two additional workers under investigation for severe misconduct. Four more workers would be under investigation, but too much time has passed. The review board should have prevented this, as should the inspector's supervisor, but those safety nets failed, in part because of the cultural environment at the time.
“ If this person is bad at their job for whatever reason (incompetence/malice) it can cause a lot of problems. It is very hard to prove a person like this wrong when they are covering their arse after making a mistake.”
This seems guaranteed to occur every year, then, since incompetence/malice will happen eventually across thousands upon thousands of cases?
I heard there's a country where they can even SWAT you out of existence with a simple phone call, but it sounds so outrageous this must be some evil communist dictatorship third-world place. I really don't remember.
Wait, Sabu's kids were foster kids. He was fostering them. Certainly if he went to jail, they'd go back to the system.
I mean, if you're a sole caretaker and you've been arrested for a crime, and the evidence looks like you'll go to prison, you're going to have to decide what to do with the care of your kids on your mind. I suppose that would pressure you to become an informant instead of taking a longer prison sentence, but there's pressure to do that anyway, like not wanting to be in prison for a long time.
France has Ariane, which was good enough to send the James Webb Telescope to a Lagrange point with extra precision. It's all fun and games until the French finish their cigarette, arm French Guiana and fire ze missiles.
As they say: you can beat the rap but not the ride. If a state wants to make your life incredibly difficult for months or even years they can, the competent ones can even do it while staying (mostly) on the right side of the law.
We are not entirely sure the rule of law in America isn't already over.
People are putting a lot of weight on the midterm elections, which are more or less the last line of defense besides a so-far tepid response by the courts; and even then, consequence-free defiance of court orders is now rampant.
We're really near the point of no return, and a lot of people don't seem to notice.
> Also, they are restricted in how they use it, and defendents have rights and due process.
As we're seeing with the current US President... the government doesn't (have to) care.
In any case, CSAM is the one thing other than Islamist terrorism that will bypass a lot of restrictions on how police are supposed to operate (see e.g. Encrochat, An0m) across virtually all civilized nations. Western nations also will take anything that remotely smells like Russia as a justification.
Well, that's particular to the US. It just shows that checks and balances are not properly implemented there; it's just that previous presidents weren't exploiting that maliciously for their own gain.
>> they are restricted in how they use it, and defendents have rights and due process.
That due process only exists to the extent the branches of govt are independent, have co-equal power, and can hold and act upon different views of the situation.
When all branches of govt are corrupted or corrupted to serve the executive, as in autocracies, that due process exists only if the executive likes you, or accepts your bribes. That is why there is such a huge push by right-wing parties to take over the levers of power, so they can keep their power even after they would lose at the ballot box.
> Sabu was put under pressure by the FBI; they threatened to place his kids into foster care.
This is pretty messed up, btw.
Child social work systems in the USA are very messed up. It is not uncommon for minority families to lose the right to parent their children over very innocuous things that would not happen to a non-oppressed class.
It is just another way for the justice/legal system to pressure families that have not been convicted or penalized under the supervision of a court.
And this isn't the only lever they use.
Every time I read crap like this I just think of Aaron Swartz.
One can also say we do too little for children who get mistreated. Taking care of other people's children is never easy; the decision needs to be fast and effective, and no one wants to take the decision to end it, because there are those rare cases where children die because of a reunion with their parents.
>Sabu was put under pressure by the FBI; they threatened to place his kids into foster care.
>That was legal. Guess what: similar things would be legal in France.
Lawfare is... good now? Between Trump being hit with felony charges for falsifying business records (lawfare is good?) and Lisa Cook getting prosecuted for mortgage fraud (lawfare is bad?), I've honestly lost track at this point.
>The same way the president of the USA can order a drone strike on a Taliban warlord, the president of France could order Musk's plane to be escorted to Paris by three fighter jets.
What's even the implication here? That they're going to shoot his plane down? If there's no threat of violence, what does the French government even hope to achieve with this?
>fighter jets ARE a threat of violence, and it is widely understood and acknowledged.
That's not a credible threat, because there's approximately a 0% chance France would actually follow through with it. Not even Trump would resort to murder to get rid of his domestic adversaries. As we've seen with the feds, the best he could muster was some spurious prosecutions. France murdering someone would put it on par with Russia or India.
In the USA they would be allowed to down any aircraft not complying with national air interception rules, that would not be murder. It would be equivalent to not dropping a gun once prompted by an officer and being shot as a result.
I think the implication of the fighter jets is that they force the plane to land within a particular jurisdiction (where he is then arrested) rather than allowing it to just fly off to somewhere else. Similar to the way that a mall security guard might arrest a shoplifter; the existence of security guards doesn't mean the mall operators are planning to murder you.
Guards can plausibly arrest you without seriously injuring you. But according to https://aviation.stackexchange.com/a/68361 there are no safe options if the pilot really doesn't want to comply, so there is no “forcing” a plane to land somewhere, just making it very clear that powerful people really want you to stop and can impose more consequences on the ground if you don't.
Planes are required to comply with instructions; if they don't they're committing a serious crime and the fighters are well within their international legal framework to shoot the plane down. They would likely escalate to a warning shot with the gun past the cockpit, and if the aircraft is large enough they might try to shoot out one engine instead of the wing or fuselage.
I suspect fighter pilots are better than commercial pilots at putting their much-higher-spec aircraft so uncomfortably close that your choices narrow down to complying with their landing instructions or suicidally colliding with one - in which case the fighter has an ejector seat and you don't.
I felt like you ruled out collision when you said they're not going to murder him, though, granted, an accidental but predictable collision after repeated refusals to obey orders is not exactly murder. I think the point stands: they have to be willing to kill or to back down, and as others said, I'm skeptical France or similar countries would give the order for anything short of an imminent threat regarding the plane's target. If Musk doesn't want to land where they want him to, he's going to pay the pilot whatever it takes, and the fighter jets are going to back off, because whatever they want to arrest him for isn't worth an international incident.
Don't forget that the captain of the plane makes the decisions, not Elon.
If the captain disobeyed a direct threat like that from a nation, his career would be over. Yeah, Elon might throw money at him, but that guy would most likely never be allowed to fly near French territory again, and I'd guess the whole cabin crew as well.
Being cleared to fly anywhere in the world is their job.
It would be quite stupid to lose that, like a truck driver getting his license revoked for a DUI.
>Don't forget that the captain of the plane makes the decisions, not Elon.
>If the captain disobeyed a direct threat like that from a nation, his career would be over. Yeah, Elon might throw money at him, but that guy would most likely never be allowed to fly near French territory again, and I'd guess the whole cabin crew as well.
Again, what's France trying to do? Refuse him entry to France? Why would they need to threaten to shoot down his jet for that? Just harassing/pranking him (eg. "haha got you good with that jet lmao")?
Well, when everything is lawfare, it logically follows that it won't always be good or always be bad. It seems Al Capone being taken down for tax fraud would similarly be lawfare by these standards, or am I missing something? Perhaps lawfare (sometimes referred to as "prosecuting criminal charges", as far as I can tell, given this context) is simply just in some cases and unjust in others.
Offline syncing of Outlook could reveal a lot of emails that would otherwise only be on a foreign server. A lot of people save copies of documents locally as well.
Most enterprises have fully encrypted workstations, and when they don't, they use VMs where the desktop is just a thin client that doesn't store any data locally. So there should really be nothing of interest in the office itself.
Except that they have encryption, which should be the standard anyway? I mean, how much data would the authorities actually retrieve when most of the stuff is located on X's servers anyway? I have my doubts.
The authorities will request the keys for local servers and will get them. As for remote ones (outside of French jurisdiction), it depends where they are and how much X wants to make their life difficult.
Musk and X don't seem to be the type to care about any laws or any compelling legal requests, especially from a foreign government. I doubt the French will get anything other than this headline.
Getting kicked out of the EU is extremely unattractive for Twitter. But the US also has extradition treaties so that’s hardly the end of how far they can escalate.
White people already extradited to the EU during the current administration would disagree. But this administration has a limited shelf life; even hypothetically, just under 3 years of immunity isn't enough for comfort.
Yes, he has been in power since 2000 (1999, actually), but from 2008 to 2012 he was Prime Minister. Only then did he become President again, which would make the end of his second consecutive term 2024. So the current one would be his third term (by the magic of changing the constitution and legal quibbles, which effectively allow a president to stay in charge for almost four whole terms, AFAIU).
> France? A nuclear state? Paris is properly sovereign.
That is true. But nukes are not magic. Explain to me how you imagine the series of events where Paris uses their nukes to get the USA to extradite Elon to Paris. Because i’m just not seeing it.
> nukes are not magic. Explain to me how you imagine the series of events where Paris uses their nukes to get the USA to extradite Elon to Paris
Paris doesn’t need to back down. And it can independently exert effort in a way other European countries can’t. Musk losing Paris means swearing off a meaningful economic and political bloc.
France doesn't extradite its citizens, even absolute scumbags like Roman Polanski. Someone like Musk has lots of lawyers to gum up extradition proceedings, even if the US were inclined to go along. I doubt the US extradition treaty would cover this unless the French could prove deliberate sharing of CSAM by Musk personally, beyond reckless negligence. Then again, after the Epstein revelations, this is no longer so far-fetched.
If I'm an employee working in the X office in France, and the police come in, show me they have a warrant for all the computers in the building, and tell me to unlock the laptop, I'm probably going to do that, no matter what Musk thinks.
Witnesses can generally not refuse in these situations, that's plain contempt and/or obstruction. Additionally, in France a suspect not revealing their keys is also contempt (UK as well).
The game changed when Trump threatened the use of military force to seize Greenland.
At this point a nuclear power like France has no issue with using covert violence to produce compliance from Musk and he must know it.
These people have proven themselves to be existential threats to French security and France will do whatever they feel is necessary to neutralize that threat.
Musk is free to ignore French rule of law if he wants to risk being involved in an airplane accident that will have rumours and conspiracies swirling around it long after he’s dead and his body is strewn all over the ocean somewhere.
Killing foreigners outside of their own country has always been deemed acceptable by governments that are (or were until recently) considered to generally follow the rule of law, as well as by the majority of their citizens. It also doesn't necessarily contradict the rule of law.
It's just that Western countries have avoided doing that to each other, because they were all essentially allied until recently and because the political implications were deemed too severe.
I don't think, however, that France has anything to gain by doing it, or any interest whatsoever, and I doubt there's a legal framework the French government can or would want to exploit to conduct something like that legally (like declaring an emergency situation or designating a terrorist group, for example).
People were surprised when the US started just droning boats in the Caribbean and wiping out survivors, but then the government explained that it was law enforcement and not terrorism or piracy, so everyone stopped worrying about it.
Seriously, every powerful state engages in state terrorism from time to time because they can, and the embarrassment of discovery is weighed against the benefit of eliminating a problem. France is no exception: https://en.wikipedia.org/wiki/Sinking_of_the_Rainbow_Warrior
Counter-point: France has already kidnapped another social media CEO and forced him to give up encryption keys. The moral difference between France (historically or currently) and a 3rd world warlord is very thin. Also, look at the accusations. CP and political extremism are the classic go-tos when a government doesn't really have a reason to put pressure on someone but really wants to anyway. France has a very questionable history of honoring the rule of law in politics. Putting political enemies in prison on questionable charges has a long history there.
We are also talking about a country that wants to ban anonymous VPNs in the name of protecting the children, and to make everyone hand over their ID card to register an account on Instagram, TikTok, etc.
The second Donald Trump threatened to invade a nation allied with France is the second anyone who works with Trump became a legitimate military target.
Like a cruel child dismembering a spider one limb at a time, France and other nations around the world will meticulously destroy whatever resources people like Musk have and the influence those give him over their countries.
If Musk displays a sufficient level of resistance to these actions the French will simply assassinate him.
You got that backwards. Greenpeace for all its faults is still viewed as a group against which military force is a no-no. Sinking that ship cost France far more than anything they inflicted on Greenpeace. If anything, that event is evidence that going after Musk is a terrible idea.
PS Yes, Greenpeace is a bunch of scientifically-illiterate fools who have caused far more damage than they prevented. Doesn't matter because what France did was still clearly against the law.
I knew someone who was involved in an investigation (the company and the person were the victims, not the targets of the investigation). Their work laptop got placed into a legal hold, the investigators had access to all of their files, and they weren't allowed to delete anything (even junk emails) for several years.
If you're a database administrator or similar working at X in France, are you going to go to jail to protect Musk from police with an appropriate warrant for access to company data? I doubt it.
It sounds better in the news when you do a raid. These things are generally not done for any purpose other than to communicate a message and score political points.
What happened to due process? Every major firm should have a "dawn raid" policy to comply while preserving rights.
Specific to the Uber case(s), if it were illegal, then why didn't Uber get criminal charges or fines?
At best there's an argument that it was "obstructing justice," but logging people off, encrypting, and deleting local copies isn't necessarily illegal.
Violent agreement is when you're debating something with someone and you end up yelling at each other because you think you disagree, but then you realize that you (violently, as in "are yelling at each other") agree on whatever it is. Aggressive compliance is when the corporate drone over-zealously follows stupid/pointless rules when they could just look the other way, to the point of being aggressively compliant (with stupid corporate mumbo jumbo).
This is a perfect way for the legal head of the company in-country to visit some jails.
They will explain that it was done remotely and whatnot but then the company will be closed in the country. Whether this matters for the mothership is another story.
That sounds awfully difficult to do perfectly without personally signing up for extra jail time for premeditated violation of local laws. Like in that scenario, any reference to the unsanitized file or a single employee breaking omertà is proof that your executives and IT staff conspired to violate the law in a way which is likely to ensure they want to prosecute as maximally as possible. Law enforcement around the world hates the idea that you don’t respect their authority, and when it slots into existing geopolitics you’d be a very tempting scapegoat.
Elon probably isn’t paying them enough to be the lightning rod for the current cross-Atlantic tension.
True, but that’s going to be a noisy process until there are a few theoretical breakthroughs. I personally would not leave myself legally on the hook hoping that Grok faked something hermetically.
Nobody does that. It is either cooperation with law enforcement or remote lock (and then there are consequences for the in-country legal entity, probably not personally for the head but certainly for its existence).
This was a common action during the Russian invasion of Ukraine for companies that supported Ukraine and closed their operations in Russia.
Or they just connect to a mothership, with the keys on the machine (see the sketch below). The authorities can have the keys, but alas, they're useless now, because there is some employee watching the surveillance cameras in the US, and he pressed a red button revoking all of them. What part of this is illegal?
Obviously, the government can threaten to fine you any amount, close your operations, or whatever, but your company can just decide to stop operating there, like Google did after Russia imposed an absurd fine.
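To make the "red button" concrete: here's a minimal sketch of that architecture, assuming envelope-style key escrow (all names are hypothetical; uses the third-party Python `cryptography` package). The machine in the raided office holds only ciphertext and a key ID; the key material that actually unlocks it lives on a server abroad, which can revoke it the moment the cameras show police at the door:

```python
# Minimal sketch, not X's actual setup: local disks hold ciphertext plus a
# key ID; the usable key material lives only on a remote "mothership".
from cryptography.fernet import Fernet

class RemoteKeyService:
    """Stand-in for a key server in another jurisdiction."""
    def __init__(self):
        self._keys: dict[str, bytes] = {}

    def create_key(self, key_id: str) -> None:
        self._keys[key_id] = Fernet.generate_key()

    def encrypt(self, key_id: str, data: bytes) -> bytes:
        return Fernet(self._keys[key_id]).encrypt(data)

    def decrypt(self, key_id: str, blob: bytes) -> bytes:
        if key_id not in self._keys:
            raise PermissionError("key revoked")  # seized hardware is now junk
        return Fernet(self._keys[key_id]).decrypt(blob)

    def revoke(self, key_id: str) -> None:
        self._keys.pop(key_id, None)  # the "red button"

kms = RemoteKeyService()
kms.create_key("paris-laptop-42")
blob = kms.encrypt("paris-laptop-42", b"internal memo")  # what's on the disk
kms.revoke("paris-laptop-42")         # pressed by the employee in the US
kms.decrypt("paris-laptop-42", blob)  # raises PermissionError
```

Whether deliberately triggering that revocation during a raid is destruction of evidence is exactly the legal question other commenters raise.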
You know police are not all technically clueless, I hope. The French have plenty of experience dealing with terrorism, cybercrime, and other modern problems as well as the more historical experience of being conquered and occupied, I don't think it's beyond them to game out scenarios like this and preempt such measures.
As France discovered the hard way in WW2, you can put all sorts of rock-solid security around the front door, only to be surprised when your opponent comes in through the window.
They do have some physical records, but it would be mostly investigators producing a warrant and forcing staff to hand over administrative credentials to allow forensic data collection.
I assume that they have opened a formal investigation and are now going to the office to collect/purloin evidence before it's destroyed.
Most FAANG companies have training specifically for this. I assume X doesn't anymore, because they are cool and edgy, and staff training is for the woke.
That can start with self-deleting messages while you are under a court order, and it has happened before:
“Google intended to subvert the discovery process, and that Chat evidence was ‘lost with the intent to prevent its use in litigation’ and ‘with the intent to deprive another party of the information’s use in the litigation.’”
Right, but you are confusing a _conspiracy_ with staff training.
I didn't work anywhere near that level, or on anything dicey enough that I needed an "oh shit, delete everything, the Feds are here" plan, which would be a conspiracy to pervert the course of justice (I'm not sure what the common law/legal code name for that is).
The stuff I worked on was legal and in the spirit of the law, along with a paper trail (that I also still have) proving that.
>withholding evidence from the prosecution, you are going to jail if you follow.
The prosecution must present a valid search warrant for *specific* information. They don't get carte blanche, so the Uber way is correct: lock the computers and let the courts decide.
In common law / under the 4th amendment, kinda. Once you have a warrant, the word "reasonable" comes into play. It's reasonable to assume that the data you want is on the devices of certain people. If incidental data/evidence is also procured from devices that were reasonably likely to contain said data, then it's fair game.
Under the civil code, it's quite possibly different. The French have had ~3 constitutions in the last 80 years. They also don't have the concept of binding case law. Who knows what the law actually is.
Mine had a scene where some bro tried to organise the resistance. A voice-over told us that he was arrested for obstructing a legal investigation and was liable to be fired due to reputational damage.
X's training might be like you described, but everywhere else that is even vaguely beholden to law and order it would be the opposite.
> Seems like you'd want to subpoena source code or gmail history or something like that.
This would be done in parallel for key sources.
There is a lot of information on physical devices that is helpful, though. Even discovering additional apps and services used on the devices can lead to more discovery via those cloud services, if relevant.
Physical devices have a lot of additional information, though: Files people are actively working on, saved snippets and screenshots of important conversations, and synced data that might be easier to get offline than through legal means against the providers.
In outright criminal cases it's not uncommon for individuals to keep extra information on their laptop, phone, or a USB drive hidden in their office as an insurance policy.
This is yet another good reason to keep your work and personal devices separate, as hard as that can be at times. If there's a lawsuit you don't want your personal laptop and phone to disappear for a while.
Sure, it might be on the device, but they would need a password to decrypt the laptop's storage to get any of the data. There's also the possibility of the MDM software making it impossible to decrypt when given a remote signal. Even if you image the drive, you can't image the secure enclave, so if the enclave is wiped the data is impossible to retrieve (a sketch of why follows below).
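For intuition, here's a hedged sketch of the envelope-encryption idea behind that claim (hypothetical names; third-party Python `cryptography` package, not any vendor's real enclave): a drive image only ever contains ciphertext, and destroying the enclave-held volume key destroys access to all of it at once.

```python
# Simplified model, not a real secure enclave: the volume key exists only
# in tamper-resistant hardware and is never written to disk.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

enclave = {"volume_key": AESGCM.generate_key(bit_length=256)}

def write_sector(plaintext: bytes) -> bytes:
    """Encrypt a 'sector'; the return value is all an imaged drive contains."""
    nonce = os.urandom(12)
    return nonce + AESGCM(enclave["volume_key"]).encrypt(nonce, plaintext, None)

def read_sector(blob: bytes) -> bytes:
    key = enclave.get("volume_key")
    if key is None:
        raise RuntimeError("enclave wiped: ciphertext is unrecoverable")
    return AESGCM(key).decrypt(blob[:12], blob[12:], None)

sector = write_sector(b"contents of /home")
enclave.pop("volume_key")  # the remote MDM wipe signal
read_sector(sector)        # raises: the drive image alone is useless
```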
> Sure it might be on the device, but they would need a password to decrypt the laptop's storage to get any of the data.
In these situations, refusing to provide those keys or passwords is an offense.
The employees who just want to do their job and collect a paycheck aren’t going to prison to protect their employer by refusing to give the password to their laptop.
The teams that do this know how to isolate devices to avoid remote kill switches. If someone did throw a remote kill switch, that’s destruction of evidence and a serious crime by itself. Again, the IT guy isn’t going to risk prison to wipe company secrets.
I read somewhere that Musk (or maybe Thiel) companies have processes in place to quickly offload data from a location to other jurisdictions (and destroy the local data) when they detect a raid happening. I don't know how true that is, though. The only insight I have into their operations was the amazing speed with which people are badged in and out of his various gigafactories. It "appears" that they developed custom badging systems for when people drive into gigafactories, to cut the time needed to begin work. If they are doing that kind of stuff, then there has got to be something in place for a raid. (This is second hand, so take it with a grain of salt.)
EDIT: It seems from other comments that it may have been Uber I was reading about. The badging system I have personally observed outside the gigafactories. Apologies for the mixup.
Everyone defines their own moral code and trusts that more than the laws of the land. Don't tell me you've never gone over the speed limit, or broken one of the hundreds of crazy laws people break in everyday life out of ignorance.
The speed limit is not a law the same way "don't murder" is a law. And "don't destroy evidence of a crime" is a lot closer to "don't murder", legally speaking.
"Summons for voluntary interviews on April 20, 2026, in Paris have been sent to Mr. Elon Musk and Ms. Linda Yaccarino, in their capacity as de facto and de jure managers of the X platform at the time of the events,
An "audition en tant que témoin libre" is more or less the way for an investigation to give a chance to give their side of the story. Musk is not likely to be personally tried here.
Still, stoner cultures in many European countries celebrate 4/20; there will definitely be a bunch of Frenchies getting extra stoned that day. It's probably the de facto "international cannabis day" in most places in the world, at least the ones influenced by US culture, which reached pretty far in its heyday.
>The Paris prosecutor's office said it launched the investigation after being contacted by a lawmaker alleging that biased algorithms in X were likely to have distorted the operation of an automated data processing system.
I'm not at all familiar with French law, and I don't have any sympathy for Elon Musk or X. That said, is this a crime?
Distorted the operation how? By making their chatbot more likely to say stupid conspiracies or something? Is that even against the law?
> The first two points of the official document, which I re-quote below, are about CSAM.
Sorry, but that's a major translation error. "pédopornographique" properly translated is child porn, not child sexual abuse material (CSAM). The difference is huge.
Maybe US law makes a distinction, but in Europe there is no difference. Sexual depictions of children (real or not) is considered child pornography and will get you sent to the slammer.
> The term “child pornography” is currently used in federal statutes and is defined as any visual depiction of sexually explicit conduct involving a person less than 18 years old. While this phrase still appears in federal law, “child sexual abuse material” is preferred, as it better reflects the abuse that is depicted in the images and videos and the resulting trauma to the child. In fact, in 2016, an international working group, comprising a collection of countries and international organizations working to combat child exploitation, formally recognized “child sexual abuse material” as the preferred term.
> I'm not at all familiar with French law, and I don't have any sympathy for Elon Musk or X. That said, is this a crime?
GDPR and DMA actually have teeth. They just haven't been shown yet because the usual M.O. for European law violators is first, a free reminder "hey guys, what you're doing is against the law, stop it, or else". Then, if violations continue, maybe two or three rounds follow... but at some point, especially if the violations are openly intentional (and Musk's behavior makes that very very clear), the hammer gets brought down.
Our system is based on the idea that we institute complex regulations, and when they get introduced and stuff goes south, we assume that it's innocent mistakes first.
And in addition to that, there's the geopolitical aspect... basically, hurt Musk to show Trump that, yes, Europe means business and has the means to fight back.
As for the allegations:
> The probe has since expanded to investigate alleged “complicity” in spreading pornographic images of minors, sexually explicit deepfakes, denial of crimes against humanity and manipulation of an automated data processing system as part of an organised group, and other offences, the office said in a statement Tuesday.
The GDPR/DMA stuff was just the opener anyway. CSAM isn't liked by authorities at all, and genocide denial (we're not talking about Palestine here, calm your horses y'all, we're talking about Holocaust denial) is a crime in most European jurisdictions (as are the straight-arm salute and other displays of fascist insignia). We actually learned something from WW2.
420 is a stoner number, and stoners lol a lot; I thought of Elmo's failed joint-smoking on JRE before I stopped watching...
...but then other commenters reminded me there is another thing on the same date, which might have been more of the actual troll aimed at Elmo, to get him all worked up.
The merger was most likely now because they have to do it before the IPO. After the IPO, there’s a whole process to force independent evaluation and negotiation between two boards / executives, which would be an absolute dumpster fire where Musk controls both.
How was that move legal anyway? Like... a lot of people and governments gave Musk money to develop, build and launch rockets. And now he's using it to bail out his failing social media network and CSAM peddling AI service.
Money comes with strings, such as when forming an ongoing relationship with a company you expect them to not merge with other companies you are actively prosecuting. I suspect the deal is going so fast to avoid some sort of veto being prepared. Once SpaceX and xAI are officially the same, you lose the ability to inflict meaningful penalties on xAI without penalizing yourself as an active business partner with SpaceX.
I'm sure it's comforting to believe that people you disagree with do so for silly reasons, but many people will support this just because we like the rule of law.
I think there's a difference between "user-uploaded material isn't properly moderated" and "the site's own chatbot generates porn on request based on images of women who didn't agree to it", no?
> The prosecutor's office also said it was leaving X and would communicate on LinkedIn and Instagram from now on.
I mean, perhaps it's time to completely drop these US-owned, closed-source, algo-driven, controversial platforms, and start treating communication with the public that funds your existence in different terms. The goal should be to reach as many people as possible, of course, but also to ensure that the method and medium of communication are in the interest of the public at large.
I agree with you. In my opinion it was already bad enough that official institutions were using Twitter as a communication platform before it belonged to Musk and started to restrict visibility for non-logged-in users, but at least Twitter was arguably a mostly open communication platform and could be misunderstood as a public service in the minds of the less well-informed. However, deciding to "communicate" in this day and age on LinkedIn and Instagram, neither of which ever made a passing attempt to pretend to be a public communications service, boggles the mind.
> official institutions were using Twitter as a communication platform before it belonged to Musk and started to restrict visibility to non-logged in users
... thereby driving up adoption far better than Twitter itself could. Ironic or what.
This. We don't have to accept that they behave that way. They enter our economies so they need to adhere to our laws. And we can fine them. No one wants to lose Europe as a market, even if all the haters call us a shithole.
>The goal should be to reach as many people, of course, but also to ensure that the method and medium of communication is in the interest of the public at large.
Who decides what communication is in the interest of the public at large? The Trump administration?
You appear to have posted a bit of a loaded question here, apologies if I'm misinterpreting your comment. It is, of course, the public that should decide what communication is of public interest, at least in a democracy operating optimally.
I suppose the answer, if we're serious about it, is somewhat more nuanced.
To begin, public administrations should not get to unilaterally define "the public interest" in their communication, nor should private platforms for that matter. Assuming we're still talking about a democracy, the decision-making should happen democratically via a combination of law + rights + accountable institutions + public scrutiny, with implementation constraints that maximise reach, accessibility, auditability, and independence from private gatekeepers. The last bit is rather relevant, because the private sector's interests and the citizen's interests are nearly always at odds in any modern society, hence the state's roles as rule-setter (via democratic processes) and arbiter. Happy to get into further detail regarding the actual processes involved, if you're genuinely interested.
That aside - there are two separate problems that often get conflated when we talk about these platforms:
- one is reach: people are on Twitter, LinkedIn, Instagram, so publishing there increases distribution; public institutions should be interested in reaching as many citizens as possible with their comms;
- the other one is dependency: if those become the primary or exclusive channels, the state's relationship with citizens becomes contingent on private moderation, ranking algorithms, account lockouts, paywalls, data extraction, and opaque rule changes. That is entirely and dangerously misaligned with democratic accountability.
A potential middle position could be to use commercial social platforms as secondary distribution instead of as the authoritative channel, which in reality is often the case. However, due to the way societies work and how individuals operate within them, the public won't actually come across the information until it's distributed on the most popular platforms. Which is why some argue that they should be treated as public utilities, since dominant communications infrastructure has a quasi-public function (rest assured, I won't open that can of worms right now).
Politics is messy in practice, as all balancing acts are - a normal price to pay for any democratic society, I'd say. Mix that with technology, social psychology and philosophies of liberty, rights, and wellbeing, and you have a proper head-scratcher on your hands. We've already done a lot to balance these, for sure, but we're not there yet and it's a dynamic, developing field that presents new challenges.
All I've seen is Elon tried to invite himself to the "wild parties" and they told him he couldn't come and that they weren't doing them anymore lol. It's possible he went but, from what I've seen, he wasn't ever invited.
Who knows who did what on this island, and I hope we'll figure it out. But in the meantime, going to this island and/or being friends with Epstein doesn't automatically make someone a pedo or rapist.
No, but they all knew he was a pedo/rapist, and were still friends with him and went to the island of a pedo/rapist, and introduced the pedo/rapist to their friends...
We don't know how many were pedo/rapists, but we know all of them liked to socialize with one and trade favours and spread his influence.
Neither does your wife divorcing you at about the same time things started to go through the legal process...
Oops... yeah, in retrospect it was even worse... no... you can and should be judged by the friends you keep and hang out with... the same ones who seem to be circling the wagons with innocuous statements or attempts to find other scapegoats (DARVO)... hmm, what was that quote again:
"We must all hang together or we will all hang separately"
You know the flight logs are public record and have been for a decade, right? We know (and have known for a while) exactly who was and wasn't there. Who was there: Obama, Bill Clinton, and Bill Gates (his frequency of visits cost him his marriage). Who wasn't there? Trump and Elon, because at the time they weren't important enough to get an invite. All of this is a matter of public record.
Elon Musk has his own planes, he would not have needed a ride had Epstein invited him. Recently released emails also show people (like commerce secretary Howard Lutnick, who asserted at great length last year that he hadn't had any contact with Epstein since meeting him in 2005) arranging to visit Epstein at his island and taking their own yacht over there.
CSAM does not have a universal definition. In Sweden, for instance, CSAM is any image of an underage subject (real or realistic digital) designed to evoke a sexual response. If you take a picture of a 14-year-old girl (the age of consent is 15) and use Grok to give her a bikini, or make her topless, then you are most definitely producing and possessing CSAM.
> If you take a picture of a 14-year-old girl (the age of consent is 15) and use Grok to give her a bikini, or make her topless, then you are most definitely producing and possessing CSAM.
> No abuse of a real minor is needed.
Even the Google "AI" knows better than that. CSAM "is considered a record of a crime, emphasizing that its existence represents the abuse of a child."
Putting a bikini on a photo of a child may be distasteful abuse of a photo, but it is not abuse of a child - in any current law.
" Strange that there was no disagreement before "AI", right? Yet now we have a clutch of new "definitions" all of which dilute and weaken the meaning. "
Are you from Sweden? Why do you think the definition was clear across the world and not changed "before AI"? Or is it some USDefaultism where Americans assume their definition was universal?
"No. I used this interweb thing to fetch that document from Sweden, saving me a 1000-mile walk."
So you cant speak Swedish, yet you think you grasped the Swedish law definition?
" I didn't say it was clear. I said there was no disagreement. "
Sorry, there are lots of different judical definitions about CSAM in different countries, each with different edge cases and how to handle them. I very doubt it, there is a disaggrement.
But my guess about your post is, that an American has to learn again there is a world outside of the US with different rules and different languages.
> So you can't speak Swedish, yet you think you grasped the Swedish legal definition?
I guess you didn't read the doc. It is in English.
I too doubt there's material disagreement between judicial definitions. The dubious definitions I'm referring to are the non-judicial fabrications behind accusations such as the root of this subthread.
> Even the Google "AI" knows better than that. CSAM "is [...]"
Please don't use the "knowledge" of LLMs as evidence or support for anything. Generative models generate things that have some likelihood of being consistent with their input material, they don't "know" things.
Just last night, I did a Google search related to the cell tower recently constructed next to our local fire house. Above the search results, Gemini stated that the new tower is physically located on the Facebook page of the fire department.
Does this support the idea that "some physical cell towers are located on Facebook pages"? It does not. At best, it supports that the likelihood that the generated text is completely consistent with the model's input is less than 100% and/or that input to the model was factually incorrect.
It has been that way since at least 2012 here in Sweden. That case went to our highest court, and they decided a manga drawing was CSAM (maybe you are hung up on this term, though; it is obviously not the same word in Swedish).
The holder was not convicted, but that is beside the point about the material.
> Even an image where a child, e.g. through special camera arrangements, is depicted in a way intended to appeal to the sexual drive, without the depicted child being said to have engaged in any sexual behaviour at the time of depiction, can be covered by the provision. [translated from the Swedish]
This means that the child does not have to take part in sexual acts, and indeed undressing a child using AI could be CSAM.
I say "could" because all laws are open to interpretation in Sweden and it depends on the specific image. But it's safe to say that many images produces by Grok are CSAM by Swedish standards.
That's the problem with CSAM arguments, though. If you disagree with the current law and think it should be loosened, you're a disgusting pedophile. But if you think it should be tightened, you're a saint looking out for the children's wellbeing. And so laws only go one way...
You don't see a huge difference between abusing a child (and recording it) vs drawing/creating an image of a child in a sexual situation? Do you believe they should have the same legal treatment? In Japan for instance the latter is legal.
He made no judgement in his comment; he just observed the fact that the term CSAM, at least in the specified jurisdiction, applies to generated pictures of teenagers, whether real people were subjected to harm or not.
I suspect none of us are lawyers with enough knowledge of French law to know the specifics of this case.
This comment is part of a chain that starts with a very judgemental comment, and is an answer to a response challenging that starting one. You don't need knowledge of French law to want to distinguish real child abuse from imaginary abuse. One can give arguments for why the latter is also bad, but this is not an automatic judgment, it should not depend on the laws of a particular country, and I, for one, am deeply shocked that some could think it's the same crime of the same severity.
Are you implying that it's not abuse to "undress" a child using AI?
You should realize that children have committed suicide because AI deepfakes of them were spread around their schools. Just because these images are "fake" doesn't mean they're not abuse, and that there aren't real victims.
When you undress a child with AI, especially publicly on Twitter or privately through DM, that child is abused using the material the AI generated. Therefore CSAM.
Musk's social media platform has recently been subject to intense scrutiny over sexualised images generated and edited on the site using its AI tool Grok.
If a user uses a tool to break the law, it's on the person who broke the law, not the people who made the tool. Knife manufacturers aren't to blame if someone gets stabbed, right?
This seems different. With a knife the stabbing is done by the human. That would be akin to a paintbrush or camera or something being used to create CSAM.
Here you have a model that is actually creating the CSAM.
It seems more similar to a robot that is told to go kill someone and does so. Sure, someone told the robot to do something, but the creators of the robot really should have to put some safeguards to prevent it.
Text on the internet and all of that, but you should have added the "/s" to the end so people didn't think you were promoting this line of logic seriously.
If a knife manufacturer constructs an apparatus wherein someone can simply write "stab this child" on a whim to watch a knife stab a child, that manufacturer would in fact discover they are in legal peril to some extent.
I mean, no one's ever before made a tool whose scope is "making literally anything you want", including, apparently, CSAM. So we're in a bit of uncharted waters, really. Mostly, no, I would agree it's a bad idea to hold the makers of a tool responsible for how it's used. And yet, this is an especially egregious offense on the part of said tool-maker.
Like how I see this is:
* If you can't restrict people from making kiddie porn with Grok, then it stands to reason at the very least, access to Grok needs to be strictly controlled.
* If you can restrict that, why wasn't it done? It can't be completely omitted from this conversation that Grok is, pretty famously, the "unrestrained" AI, which in most respects means it swears more, quotes and uses highly dubious sources of information that are friendly to Musk's personal politics, and occasionally spouts white nationalist rhetoric. So, as part of their quest to "unwoke" Grok, did they also make it able to generate this shit?
This is really amusing to watch, because everything that Grok is accused of is something which you can also trigger in currently available open-weight models (if you know what you're doing).
There's nothing special about Grok in this regard. It wasn't trained to be a MechaHitler, nor to generate CSAM. It's just relatively uncensored[1] compared to the competition, which means it can be easily manipulated to do what the users tell it to, and that is biting Musk in the ass here.
And just to be clear, since apparently people love to jump to conclusions - I'm not excusing what is happening. I'm just pointing out the fact that the only special thing about Grok is that it's both relatively uncensored and easily available to a mainstream audience.
I'm not talking about video editing software; that's a different class of software. I'm talking about other generative AI models, which you can download today onto your computer, and have it do the same thing as Grok does.
> How is this exoneration?
I don't know; you tell me where I said it was? I'm just stating a fact that Grok isn't unique here, and if you want to ban Grok because of it then you need to also ban open weight models which can do exactly the same thing.
Well, you could not sue the maker of video-editing software over someone producing child pornography with it. You would, quite sanely, go after the pedophiles themselves.
Maybe tying together an uncensored AI model and a social network just isn't something that's ethical / should be legal to do.
There are many things where each is legal/ethical to provide, and where combining them might make business sense, but where we, as a society have decided to not allow combining them.
No. I'm just saying that people should be consistent and if they apply a certain standard to Grok then they should also apply the same standard to other things. Be consistent.
Meanwhile what I commonly see is people dunking on anything Musk-related because they dislike him, but give a free pass on similar things if it's not related to him.
Every island is capable of hosting pedophiles, but they don't. The one island that's famous for pedos is the one Musk wanted to be invited to. Find me more pedo islands, I'll dunk on them too very consistently. Whether it's AI with CSAM or islands with pedos, Musk is definitely consistent.
You cannot build a CSAM generator, period. CSAM means Child Sexual Abuse Material -- material created through the sexual abuse of children. If it came out of a generator, it is not, by definition, Child Sexual Abuse Material.
Every AI system is capable of generating CSAM and deep fakes if requested by a savvy user. The only thing this proves is that you can't upset the French government or they'll go on a fishing expedition through your office praying to find evidence of a crime.
>Every AI system is capable of generating CSAM and deep fakes if requested by a savvy user.
There is no way this is true, especially if the system is PaaS-only. Additionally, the system should have a way to tell if someone is attempting to bypass its safety measures and act accordingly.
Grok brought that thought all the way to "... so let's not even try to prevent it."
The point is to show just how aware X were of the issue, and that they chose to repeatedly do nothing against Grok being used to create CSAM and probably other problematic and illegal imagery.
I can't really doubt they'll find plenty of evidence during discovery, it doesn't have to be physical things. The raid stops office activity immediately, and marks the point in time after which they can be accused of destroying evidence if they erase relevant information to hide internal comms.
Grok does try to prevent it. They even publicly publish their safety prompt. It clearly shows they have disallowed the system from assisting with queries that create child sexual abuse material.
The fact that users have found ways to hack around this is not evidence of X committing a crime.
If AI GF Generator 9001 is producing unwilling deepfake pornography of real people, especially if of children, feel free to raid their offices as well.
>Every AI system is capable of generating CSAM and deep fakes if requested by a savvy user. The only thing this proves is that you can't upset the French government or they'll go on a fishing expedition through your office praying to find evidence of a crime.
If every AI system can do this, and every AI system is incapable of preventing it, then I guess every AI system should be banned until they can figure it out.
Every banking app on the planet "is capable" of letting a complete stranger go into your account and transfer all your money to their account. Did we force banks to put restrictions in place to prevent that from happening, or did we throw our arms up and say: oh well the French Government just wants to pick on banks?
In my opinion, the reason they raided the offices over CSAM is that there are laws on the books for CSAM and not for social manipulation. If people could be jailed for manipulation, there would be no social media platforms, lobbyists, political campaign groups, or advertisements. People are already being manipulated by AI.
On a related note, given that AI is just a tool and requires someone to tell it to make CSAM, I think they will have to prove intent, possibly by grabbing chat logs, emails, and other internal communications, but I know very little about French law or international law.
Hold on, are you saying that you should be able to be jailed for manipulation? Where would that end? Could I be jailed if I post a review for a restaurant and you feel it manipulated you? Anyone stating an opinion could be construed as manipulating. That is beyond a slippery slope; that is an authoritarian nightmare.
I believe the context I was proposing would be at the scale of world-wide manipulation. Rigging elections and such. There is a Netflix documentary called "The Great Hack" that gets into what I am discussing though from the perspective of social media algorithm. This only gets more effective when people are chatting with an AI bot that mimics a human and they think is their significant other that laughs at all their jokes and strokes their ego.
I think your interpretation would be more along the lines of making 1984, Brave New World, Fahrenheit 451, and The Handmaid's Tale a reality.
Yeah, I get that. I just hesitate to give any government even more power than it has now to silence people, which it would definitely use any law like that to do.
I will have to check that out, it sounds interesting. It was also pretty obvious how all the social media companies pushed the same narrative through COVID.
I don't like how these social networks and the media try to manipulate things but I don't think giving the government even more power will fix anything. It will probably make it worse. I think even if you had those laws on the books, you would still get manipulation through selective enforcement.
I think the only solution is education and individuals saying no to these platforms and their algorithmic feeds. I think we are already seeing a growing movement towards people either not using social media or using it far less than they did previously. I know that for me personally, I use X but only follow tech people I like and only look at the "Following" tab. It is a much better experience than the "For you" tab.
The TV station thing (talking about the US here) only applies to broadcast TV, and it is a condition of getting a frequency allotment from the government.
No, I am not saying that it is the same. I am saying that it would start as "we are just going after the tech companies", but if you give the government an inch, it will take a mile. It would take that and expand upon the hate-speech stuff you already see around the world as an excuse to arrest whoever it wanted.
I am a free-market person, so I think these sites are providing something to the market that people like, or they wouldn't be there. If you want to rein them in, fine, but you have to be careful how you word stuff or it gets pretty scary pretty quickly.
Hate speech laws exist in most of Europe and they are not abused at all. And it's not like media wouldn't already have a bunch of laws applied to it, even in the US - e.g. libel and the like. Surely you can slippery slope with that as well, right?
And the free market only works if there is a well-defined market with proper laws that are upheld. Otherwise it's a running competition where Meta/X just shoot every other competitor at the start and drive to the goal with a car. This has been known by Adam Smith already - you can't be a "free market person" while being happy with these giga-corporations trampling on laws left and right.
>French authorities opened their investigation after reports from a French lawmaker alleging that biased algorithms on X likely distorted the functioning of an automated data processing system. It expanded after Grok generated posts that allegedly denied the Holocaust, a crime in France, and spread sexually explicit deepfakes, the statement said.
I had to make a choice not to use Grok at all (I wasn't overly interested in the first place, but wanted to review how it compares to the other tools), because even just the Explore option shows photos and videos of CSAM, CSAM-adjacent material, and other "problematic" things in a photorealistic manner (such as implied bestiality).
Looking at the prompts below some of those images shows that even now there's almost zero effort at Grok to filter prompts that are blatantly looking to create problematic material. People aren't being sneaky and smart, wordsmithing subtle cues to try to bypass content filtering; they're often saying "create this" bluntly and directly, and Grok is happily obliging.
Given America passed PAFACA (intended to ban TikTok, which Trump instead put in hands of his friends), I would think Europe would also have a similar law. Is that not the case?
Are you talking about this [1]? I don't know the answer to your question whether or not the EU has the same policy. That is talking about control by a foreign adversary.
I think that would delve into whether or not the USA would be considered a foreign adversary to France. I was under the impression we were allies since like the 1800s or so despite some little tiffs now and again.
I am not surprised at all. Independent of whether this is true, such a decision from the EU would never be acted upon. There are decades' worth of bureaucratic layers between the one who says "ban it" somewhere in Brussels and the operator blackholing the DNS and filtering traffic.
Why do you think that? It can take a few years for national laws to be put in place, but that also depends on how much certain countries push it. Regarding Internet traffic, I assume a few specific countries that route most of the traffic would be enough to stop operation for the most part.
Have you ever seen an actual EU-wide decision on such matters and an actual application?
The closest I can think of is GDPR, which has its great aspects, and also the cookie law (which is widely misinterpreted). And some things like private IPs being PII, which promotes nonsensical "authority notifications" that are never used afterwards.
We have consulting companies doing yearly audits on companies to close the books. And yet hacks happen all the time. Without consequences.
There is an ocean between what is announced and lives on paper and the reality of its application. If you work in compliance and cybersecurity, you see this every day.
They will have their DNS resolvers refuse to answer queries for X's domains; that can be done in each country (a rough sketch follows). They can use deep packet inspection tools and go from there. If the decision is EU-wide, then they will roll that out.
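As a rough illustration of what DNS-level blocking amounts to (a simplified sketch, not how a national resolver is actually configured; the blocklist and function are made up):

```python
# Toy resolver policy, illustration only: queries for blocked zones (and
# their subdomains) get no answer; everything else resolves normally.
import socket

BLOCKED_ZONES = {"blocked.example"}  # illustrative list, not a real order

def resolve(qname: str) -> str | None:
    labels = qname.rstrip(".").lower().split(".")
    # Match the name itself and every parent suffix against the blocklist.
    if any(".".join(labels[i:]) in BLOCKED_ZONES for i in range(len(labels))):
        return None  # stand-in for an NXDOMAIN/blackhole response
    return socket.gethostbyname(".".join(labels))  # ordinary resolution

print(resolve("grok.blocked.example"))  # None: blackholed
print(resolve("example.com"))           # resolves as usual
```

Of course, users can switch to foreign resolvers or a VPN, which is why DPI would be the escalation path.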
There is no law that would permit the EU to do this. This would be a huge thing to introduce and implement, probably a 2-3 year project, and would almost certainly be strongly opposed by multiple member countries.
Simply because if you were to ban this type of platform you wouldn't need Musk to "move it towards the far right" because you would already be the very definition of a totalitarian regime.
But whatever zombie government France is running can't "ban" X anyway because it would get them one step closer to the guillotine. Like in the UK or Germany it is a tinderbox cruising on a 10-20% approval rating.
If "French prosecutor" want to find a child abuse case they can check the Macron couple Wikipedia pages.
> if you were to ban this type of platform you wouldn't need Musk to "move it towards the far right" because you would already be the very definition of a totalitarian regime
Paradox of tolerance. (The American right being Exhibit A for why trying to let sunlight disinfect a corpse doesn’t work.)
Big platforms and media are only good if they try to move the populace to the progressive, neoliberal side. Otherwise we need to put their executives in jail.
> could you clarify what the difference is between the near right and the far right?
It's called far-right because it's further to the right (starting from the centre) than the right. Wikipedia is your friend; it offers plenty of examples and even helpfully lays out the full spectrum in a way even a five-year-old with a developmental impairment could understand.
I was surprised by your claim that Wikipedia would categorize mild restrictions on immigration as an element of far-right politics, so I read that article to see it for myself. I didn't see anything about mild restrictions. Would you care to point out where you saw that?
Well, far right is a spectrum, obviously. But a party that equates immigration of a particular religion as terrorism is not "mild immigration restrictions" in my reading.
I don't know about that party, but National Rally doesn't say that, and also polls around 34% of French people. So it remains that the Wikipedia "far right" definition is a very wide spectrum.
Um, the article I posted was about the same party. The BBC considers them far-right [1], Politico considers them far-right [2], Reuters considers them far-right [3], AP News considers them far-right [4], NBC News considers them far-right [5], the New York Times considers them far-right [6], Deutsche Welle considers them far-right [7].
I don't think the Wikipedia characterization is far off a pretty commonly held sentiment. You are of course, able to disagree and consider them far-left, center, or whatever label you want.
You stated earlier that because Wikipedia called mild immigration reform far-right (which it did not to my reading, so you pointed to National Rally as an example) words don't mean anything. But words do mean things by consensus, and from my reading the consensus is that National Rally is far-right.
Of course, many far-right (and far-left) thinkers consider themselves centrists or mild, so there will be disagreement.
The article you posted said, "we just call them that because everyone else does".
But there's also an obvious semantic fail when 34% of the electorate is "far right". It would mean only 16% (half the electorate minus that 34%) remains for the non-far right. It implies that "far" is just meaningless cant.
This is obviously diversion but anyway:
Bunch of "American and European" "patriots" that he retweets 24/7 turned out to be people from Iran, Pakistan, India and Russia. These accounts generate likes by default by accounts with "wife of vet" in bio and generic old_blonde_women.jpeg aka bots.
It's pretty obvious, media is called the 4th power.
Control the media, you control the information that a significant part of Europeans get. Elections aren't won by 50%, you only need to convince 4 or 5% of the population that the far right is great.
It gives people who aren't aware of the bot accounts / thumb on the scale the perception that insane crackpot delusions are more popular than they are.
There is a reason Musk paid so much for Twitter. If this stuff had no effect he wouldn't have bought it.
Social media should not allow algorithms to actively AMPLIFY disinformation to the public.
If people want to post disinformation that's fine, but the way that these companies push that information onto users is the problem. There either needs to be accountability for platforms or a ban on behavior driven content feeds.
People lying on the internet is fine. Social media algorithms amplifying the lie because it has high engagement is destroying our society.
The same way that social media has destabilized the USA.
By exposing people to a flood of misinformation and politically radicalizing content designed to maximize engagement via emotion (usually anger).
Remember when Elon Musk alleged that he was going to find a trillion dollars (a year) in waste fraud and abuse with DOGE? Did he ever issue a correction on that statement after catastrophically failing to do so? Do you think that kind of messaging might damage the trust in our institutions?
While there may be some feeds on Xitter that are basic algorithms, (1) it's not the only feed, (2) there may still be less mechanical algorithmic choices within Following (what order, what mix, how much), and (3) evidence to the contrary exists; are you freeing yourself of facts?
I haven't dug into whatever they open sourced about the algorithm to make definitive statements. Regardless, there are many pieces out there where you can learn about the evidence for direct manipulation.
> You can just go on the app yourself and verify this
That's not how science and statistics work. Comprehensive evidence and analysis is a search or chatbot away. The legal cases will go into the details as well, by the nature of how legal proceedings work.
Far right to me is advocating for things that discriminate based on protected traits like race, sex, etc. So if you’re advocating for “white culture” above others, that’s far right. If you’re advocating for the 19th amendment (women’s right to vote) to be repealed (as Nick Fuentes and similar influencers do), that’s also far right. Advocating for ICE to terrorize peaceful residents, violate constitutional rights, or outright execute people is also far right.
Near right to me is advocating for things like lower taxes or different regulations or a secure border (but without the deportation of millions who are already in the country and abiding by laws). Operating the government for those things while still respecting the law, upholding the constitution, defending civil rights, and avoiding the deeply unethical grifting and corruption the Trump administration has normalized.
Obviously this is very simplified. What are your definitions out of curiosity?
I hate to wade into this cesspool. How about some of the real obvious ones:
* Crypto currency rug pulls (World Liberty Financial)
* Donations linked with pardons (Binance)
* Pardoning failed rebels of a coup that favored him (Capitol rioters)
* Bringing baseless charges against political enemies and journalists (Comey, Letitia James, Don Lemon)
* Musk (DOGE) killing government regulatory agencies that had investigations and cases against his companies
This is with two minutes of thought while waiting for a compile. I'm open to hearing how I am wrong.
de Gaulle would be considered insanely far right today. Many aspects of Bush (assuming GW here) would be considered not in line with America's far-right today.
Assume good intent. It helps you see the actually interesting point being made.
They wrote "Bush was right wing" (unless it was edited), so what's your point in saying "Many aspects of Bush (assuming GW here) would be considered not in line with America's far-right today." ?
Even at the time Bill Clinton was already very much right-wing. When he was in power, he oversaw the destruction of public services and the introduction of neoliberalism. Is that not right-wing?
It's not just me saying this. Ask anyone who was politically active (as a leftist) in the 90s. I'm not sure what the equivalent of the Democratic Socialists of America (center-left) was at that time, but I'm sure there was an equivalent and Bill Clinton was much more right-wing. That's without mentioning actual left-wing parties (like communists, anarchists, Black Panthers etc).
Not a single one of those three things is either left-wing or right-wing. It depends on the actual implementation.
For example, universal health-care is only left-wing if it's a public service. Taking money out of the State's pockets to finance private healthcare and pharmaceutical for-profit corporations is very much a definition of right-wing policy.
> de Gaulle would be considered insanely far right today
As much as it pains me to say this, because I myself consider de Gaulle to be a fascist in many regards, that's far from a majority opinion (disclaimer: I'm an anarchist).
I think de Gaulle was a classic right-wing authoritarian ruler. He had to take some social measures (which some may view as left-wing) because the workers at the end of WWII were very organized and had tens of thousands of rifles, so such was the price of social peace.
He was right-wing because he was rather conservative, for private property/entrepreneurship, and strongly anti-communist. Still, he had strong national planning for the economy, much State support for private industry (Elf, Areva, etc) and strong policing on the streets (see also the Service d'Action Civique for de Gaulle's fascist militias, with long ties to historical Nazism and the secret services).
That being said, de Gaulle to my knowledge was not really known for racist fear-mongering or hate speech. The genocides he took part in (e.g. against the Algerian people) were very quiet, and the official story line was that there was no story. That's in comparison with far-right people who, already at the time and still today, build an image of the ENEMY towards whom all hate and violence is necessary. See also Umberto Eco's Ur-fascism for characteristics of fascist regimes.
In that sense, and it really pains me to write this, de Gaulle was much less far-right than today's Parti Socialiste, which pretends to be left-wing despite ruling with right-wing anti-social measures and inciting hatred towards French Muslims and binationals.
While de Gaulle being far-right is not a majority opinion (except in some marginal circles), he would undoubtedly be considered far-right if he was governing today, which is what GP seems to have meant.
I think that, for most Western people today, far-right == bad to non-white people, independent of intention (as you demonstrated with your remark about the PS), so de Gaulle's approach to Algeria, whether he's loud about it or not, would qualify him as far-right already.
All this to say, the debate is based on differing definitions of far-right (for example you conflate fascism and far-right and use Eco, while GP and I seem to think it's about extremely authoritarian + capitalist), and has started from an ignorant comment by an idiot who considers Bush (someone who is responsible for the death of around a million Iraqis, the creation of actual torture camps, large-scale surveillance, etc.) not far-right because, I assume, he didn't say anything mean about African-Americans.
Believing in free speech is neither left nor right, it's on the freedom/authority axis which is perpendicular. Most people on the left never advocated to legalize libel, defamation, racist campaigns, although the minority that did still do today.
The "free-speechism" of the past you mention was about speaking truth to power, and this movement still exists on the left today, see for example support for Julian Assange, arrested journalists in France or Turkey, or outright murdered in Palestine.
When Elon Musk took over Twitter and promised free speech, he very soon actually banned accounts he disagreed with, especially leftists. Why free speech may be more and more perceived as right-wing is that, despite hosting outright criminal speech with criminal consequences (such as incitement of violence against harmless individuals like Mark Bray), billionaires have weaponized propaganda on a scale never seen before through their ownership of all the major media outlets and social media platforms, arguing it's a matter of free speech.
> The child abuse feels like a smaller problem compared to that risk.
I think we can and should all agree that child sexual abuse is a much larger and more serious problem than political leanings.
It's ironic, as you're commenting on a social media platform, but I think it's frightening what social media has done to us with misinformation, vilification, and echo chambers, if people now think political leanings are worse than murder, rape, or child sexual abuse.
In fairness, AI-generated CSAM is nowhere near as evil as real CSAM. The reason why possession of CSAM was such a serious crime is because its creation used to necessitate the abuse of a child.
It's pretty obvious the French are deliberately conflating the two to justify attacking a political dissident.
Definitely agree on which is worse! To be clear, I'm not saying I agree with the French raid. Just that statements about severe crimes (child sexual abuse for the above poster - not AI-generated content) being "lesser problems" compared to politics is a concerning measure of how people are thinking.
It may not be worse "objectively" and in direct harm.
However, it has one big problem that is rarely discussed: the normalization of behaviour, interests and attitudes. It just becomes a thing that Grok can do (for paid accounts), and people think "ok, no harm, no problem"... Long-term, there will be harm. This has been demonstrated over decades of investigation of CSAM.
The thing is, a lot of the recent legal proceedings surrounding X are about whether X fulfilled the legally required due diligence and, if not, what level of negligence we are speaking about.
And the thing about negligence which caused harm to humans (instead of e.g. just financial harm) is that:
a) you can't opt out of responsibility; it doesn't matter what you put into your TOS or other contracts
b) executives who are found responsible for the negligent actions of a company can be held _personally_ liable
And independent of what X actually did, Musk, as the highest-level executive, personally:
1) frequently made statements that imply gross negligence (to be clear, that isn't necessarily how X acted, which is the actually relevant part)
2) claimed that all major engineering decisions etc. are his and no one else's (because he loves bragging about what a good engineer he is)
This means summoning him for questioning is, legally speaking, a must-have, independent of whether you expect him to show up or not. And he probably should take it seriously, even if that just means sending a different high-level executive from X instead.
Good and honestly it’s high time. There used to be a time when we could give corps the benefit of the doubt but that time is clearly over. Beyond the CSAM, X is a cesspool of misinformation and generally the worst examples of humanity.
I’m sure Musk is going to say this is about free speech in an attempt to gin up his supporters. It isn’t. It’s about generating and distributing non consensual sexual imagery, including of minors. And, when notified, doing nothing about it.
If anything it should be an embarrassment that France are the only ones doing this.
(it’ll be interesting to see if this discussion is allowed on HN. Almost every other discussion on this topic has been flagged…)
> If anything it should be an embarrassment that France are the only ones doing this.
As mentioned in the article, the UK's ICO and the EC are also investigating.
France is notably keen on raids for this sort of thing, and a lot of things that would be basically a desk investigation in other countries result in a raid in France.
* "implemented technological measures to prevent the Grok account from allowing the editing of images of real people in revealing clothing" - https://www.bbc.co.uk/news/articles/ce8gz8g2qnlo
* locked image generation down to paid accounts only (i.e. those individuals that can be identified via their payment details).
Have the other AI companies followed suit? They were also allowing users to undress real people, but it seems the media is ignoring that and focussing their ire only on Musk's companies...
> Have the other AI companies followed suit? They were also allowing users to undress real people
No they weren’t? There were numerous examples of people feeding the same prompts to different AIs and having their requests refused. Not to mention, X was also publicly distributing that material, something other AI companies were not doing. Which is an entirely different legal liability.
Making/distributing a photo of a non-consenting bikini-wearer is no more illegal when originated by computer in bedroom than done by camera on public beach.
The part of X’s reaction to their own publishing that I’m most looking forward to seeing in slow motion in the courts and press is their attempt at agency laundering by having their LLM generate an apology in the first person.
How is that relevant? Are you implying that being a US military contractor should make you immune to the laws of other countries that you operate in?
The onus is on the contractor to make sure any classified information is kept securely. If by raiding an office in France a bunch of US military secrets are found, it would suggest the company is not fit to have those kind of contracts.
That's one way to steal the intellectual property and trade secrets of an AI company more successful than any French LLMs. And maybe accidentally leak confidential info.
I think the grok incident/s were distasteful but I can't honestly think of a reason to ban grok and not any other AI product or even photoshop.
I barely use it these days and think adding it to Twitter is pretty meh, but I view this as regulators exploiting an open goal to attack the infrastructure itself rather than Grok. E.g. the prune-juice-drinking sandal-wearers in Britain (many of whom are now government backbenchers) have absolutely despised Twitter and wanted to ban it ever since their team lost control. Similar vibe across the rest of Europe.
They have (astutely, if they realise it at least) found one of the last vaguely open/mainstream spaces for dissenting thought and are thus almost definitely plotting to shut it down. Reddit is completely captured. The right is surging dialectically at the moment but it is genuinely reliant on twitter. The centre-left is basically dead so it doesn't get the same value from bluesky / their parts of twitter.
Interesting. This is basically the second enforcement on speech / images that France has done - first was Pavel Durov @ Telegram. He eventually made changes in Telegram's moderation infrastructure and I think was allowed to leave France sometime last year.
I don't love heavy-handed enforcement on speech issues, but I do really like a heterogeneous cultural situation, so I think it's interesting and probably to the overall good to have a country pushing on these matters very hard, just as a matter of keeping a diverse set of global standards, something that adds cultural resilience for humanity.
LinkedIn is not a replacement for Twitter, though. I'm curious if they'll come back post-settlement.
The point of banning real CSAM is to stop the production of it, because the production is inherently harmful. The production of AI or human generated CSAM-like images does not inherently require the harm of children, so it's fundamentally a different consideration. That's why some countries, notably Japan, allow the production of hand-drawn material that in the US would be considered CSAM.
I'm strongly against CSAM, but I will say this analogy doesn't quite hold (though the values behind it do).
Libel must be an assertion that is not true. Photoshopping or AIing someone isn't an assertion of something untrue. It's more the equivalent of saying "What if this were true?", which is perfectly legal.
“ 298 (1) A defamatory libel is matter published, without lawful justification or excuse, that is likely to injure the reputation of any person by exposing him to hatred, contempt or ridicule, or that is designed to insult the person of or concerning whom it is published.
Marginal note:Mode of expression
(2) A defamatory libel may be expressed directly or by insinuation or irony
(a) in words legibly marked on any substance; or
(b) by any object signifying a defamatory libel otherwise than by words.”
It doesn't have to be an assertion, or even a written statement.
> The point of banning real CSAM is to stop the production of it, because the production is inherently harmful. The production of AI or human generated CSAM-like images does not inherently require the harm of children, so it's fundamentally a different consideration.
Quite.
> That's why some countries, notably Japan, allow the production of hand-drawn material that in the US would be considered CSAM.
"Child sexual abuse material (CSAM) is not “child pornography.” It’s evidence of child sexual abuse—and it’s a crime to create, distribute, or possess. "
Durov was held on suspicion that Telegram was willfully failing to moderate its platform and allowed drug trafficking and other illegal activities to take place.
X has allegedly illegally sent data to the US in violation of the GDPR and contributed to child porn distribution.
Note that both are directly related to violations of data-safety law or association with separate criminal activities; neither is about speech.
CSAM was the lead in the 2024 news headlines in the French prosecution of Telegram also. I didn't follow the case enough to know where they went, or what the judge thought was credible.
From a US mindset, I'd say that generation of communication, including images, would fall under speech. But then we classify it very broadly here. Arranging drug deals on a messaging app definitely falls under the concept of speech in the US as well. Heck, I've been told by FBI agents that they believe assassination markets are legal in the US - protected speech.
Obviously, assassinations themselves, not so much.
"I've been told by FBI agents that they believe assassination markets are legal in the US - protected speech."
I don't believe you. Not sure what you mean by "assassination markets" exactly, but "Solicitation to commit a crime of violence" and "Conspiracy to murder" are definitely crimes.
An assassination market, at least the one we discussed, works like this - One or more people put up a bounty paid out on the death of someone. Anyone can submit a (sealed) description of the death. On death, the descriptions are opened — the one closest to the actual circumstances is paid the bounty.
One of my portfolio companies had information about contributors to these markets — I was told by my FBI contact when I got in touch that their view was the creation of the market, the funding of the market and the descriptions were all legal — they declined to follow up.
Durov wasn't arrested because of things he said or things that were said on his platform, he was arrested because he refused to cooperate in criminal investigations while he allegedly knew they were happening on a platform he manages.
If you own a bar, you know people are dealing drugs in the backroom and you refuse to assist the police, you are guilty of aiding and abetting. Well, it's the same for Durov except he apparently also helped them process the money.
>but I do really like a heterogenous cultural situation, so I think it's interesting and probably to the overall good to have a country pushing on these matters very hard
Censorship increases homogeneity, because it reduces the amount of ideas and opinions that are allowed to be expressed. The only resilience that comes from restricting people's speech is resilience of the people in power.
You were downvoted -- a theme in this thread -- but I like what you're saying. I disagree, though, on a global scale. By resilience, I mean to reference something like a monoculture plantation vs a jungle. The monoculture plantation is vulnerable to anything that figures out how to attack it. In a jungle, a single plant or set might be vulnerable, but something that can attack all the plants is much harder to come by.
Humanity itself is trending more toward monoculture socially; I like a lot of things (and hate some) about the cultural trend. But what I like isn't very important, because I might be totally wrong in my likes; if only my likes dominated, the world would be a much less resilient place -- vulnerable to the weaknesses of whatever it is I like.
So, again, I propose that for the human race as a whole, broad cultural diversity is really critical, and worth protecting. Even if we really hate some of the forms it takes.
Telegram isn't end-to-end encrypted. For all the marketing about security, it has none apart from TLS, plus an optional "secret chat" feature that you have to explicitly select, which only works with 2 participants and doesn't work very well.
They can read all messages, so they don't have an excuse for not helping in a criminal case. Their platform had a reputation of being safe for crime, which is because they just... ignored the police. Until they got arrested for that. They still turn a blind eye but not to the police.
Ok, thank you! I did not know that, I'm ashamed to admit! Sort of like studying physics at university and a decade later forgetting V=IR when I actually needed it for some solar install. I took a "technical hiatus" of about 5 years and am recently coming back.
Anyway, to cut to the chase, I just checked out Matthew Green's post on the subject; he is on my list of default "trust what he says about cryptography" along with some others like djb, Nadia Heninger, etc.
Embarrassed to say I did not realise; I should have known! 10+ years ago I used to lurk in the IRC dev channels of every relevant cypherpunk project, including TextSecure and otr-chat. I watched Signal being made, and before that witnessed chats with the devs and Ian Goldberg and such. I just assumed Telegram was multiparty OTR.
OOPS!
Long-winded post because that is embarrassing (as someone who studied cryptography as a mathematics undergrad in 2009, did a postgrad wargames and computer security course in 2010, and worse, whose word around 2012-2013 was taken on these matters by activists, journalists and researchers with pretty gnarly threat models, for instance some Guardian stories and a former researcher into torture). I'm also the person who wrote the bits of "how to hold a crypto party" that made it a protocol without an organisation and made clear the threat model was that anyone could be there. Oops, oops, oops.
Yes, thanks for letting me know; I hang my head in shame for missing that one, or somehow believing it without much investigation. Thankfully it was just my own personal use, to contact a friend in the States who isn't already on Signal, etc.
Anyway, as they say, "use it or lose it": my assumptions here are no longer valid, and I can't be considered to have an educated opinion if I got something that basic wrong.
In November 2012, Epstein sent Musk an email asking “how many people will you be for the heli to island”.
“Probably just Talulah and me. What day/night will be the wildest party on your island?” Musk replied, in an apparent reference to his former wife Talulah Riley.
... Eh? This isn't about Musk's association with Epstein, it's about his CSAM generating magic robot (and also some other alleged dodgy practices around the GDPR etc).
"Shocking Grok images"... really? It's AI. We know AI can make any image. The images are nothing but fake digital paintings that lose all integrity as quickly as they're generated.
Beyond comedic kicks for teenage boys, they're inconsequential for everyone else. But nevermind that, hand me a pitchfork and pre-fabricated sign and point me to the nearest anti-Grok protest.
https://x.com/elonmusk/status/2011432649353511350
It sounds like they are following due process.
Have we outsourced all accountability for the crimes of humans to AI now?
[1] https://www.nbcnewyork.com/news/national-international/girl-...
And the AI is at fault for this sentencing, not the school authorities/prosecutors/judges dishing justice? WTF.
How is this an AI problem and not a legal system problem?
I don't accept this as good-faith argumentation, and neither do HN's rules.
The school authorities messed up.
Both are accountable.
Correction: kids made the pictures. Using Grok as the tool.
If kids were to "git gud" at photoshop and use that to make nudes, would you arrest Adobe?
If kids ask a newspaper vendor for cigarettes and he provides them .. that's a no-no.
If kids ask a newspaper vendor for nudes and he provides them .. that's a no-no.
If kids ask Grok for CSAM and it provides them .. then ?
Meanwhile, the existence/creation of CSAM of actual people isn't legal for anyone, no matter the age.
That's actually a good argument. And that's how the UK ended up banning not just guns but all sorts of swords, machetes and knives, while violent crime rates have not dropped.
So maybe dangerous knives are not the problem, but the people using them to kill other people. So then, where do we draw the line between lethal weapons and crime correlation? At which instruments? Same with software tools, which keep getting more powerful with time, lowering the bar to entry for generating nudes of people. Where do we draw the line on which tools are responsible for that?
No. That is not how AI works nowadays. Kids told the tool what they wanted, and the tool understood and could have refused, like all the other models, but instead it delivered. And it could only do so because it was specifically trained for that.
"If kids were to "git gud" at photoshop "
And what is that supposed to mean?
Adobe makes general purpose tools as far as I know.
Banning AI doesn't stop the damage from occurring. Bullies at school/college have been harassing their victims, often to the point of suicide, for decades/centuries before AI.
Edit for the addition of the line about bullying: "Bullying has always happened, therefore we should allow new forms of even worse bullying to flourish freely, even though I readily acknowledge that it can lead to victims committing suicide" is a bizarre and self-contradictory take. I don't know what point you think you're making.
Education might be so disrupted you have to change schools.
The crime is creating a system that lets schoolboys create fake nudes of other minors.
You don't just get to build a CSAM-generator and then be like "well I never intended for it to be used...".
The humans running a company are liable for the product that their company builds, easy as that.
So like Photoshop? Do you want to raid Adobe's HQ?
It is a privately controlled public-facing group chat. Being a chat-medium does not grant you the same rights as being a person. France isn't America.
If a company operates to the detriment and against the values of a nation, e.g. not paying their taxes or littering in the environment, the nation will ask them to change their behavior.
If there is a conspiracy of contempt, at some point things escalate.
Plus, how do you even judge the age of AI-generated fake people to say it's CP? Reminds me of when UK activists were claiming Grok's anime-girl avatar was a minor and deserved to be considered CP, when she had massive tits that no kid has. So how much of this is just a political witch-hunt looking for any reason to justify itself?
Also, it seems pretty likely that Musk is tangled up with the Epstein shit. First Musk claimed he turned down offer to go to the island. Now it turns out Musk repeatedly sought to visit, including wanting to know when the "wildest" party was happening, after Epstein was already known as a child sex abuser. Musk claimed that Epstein had never been given a tour of SpaceX but it turns out he did in 2013. It's the classic narcissistic "lie for as long as possible" behaviour. Will be interesting to see what happens as more is revealed.
No, I said no such thing. What I said was that the resources of the authorities are a finite pie. If most of them go towards petty stuff like corporate misbehavior that hurts nobody, there won't be enough for grave crimes like actual child abuse that actually hurts real people.
Same how police won't bother with your stolen phone/bike because they have bigger crimes to catch. I'm asking for the same logic be applied here.
That's like the 1993 moral panic that video games like Doom cause mass shootings, or the 1980s panic that metal music causes Satanism, or the 1950s moral panic that superhero comic-book violence leads to juvenile delinquency. Politicians are constantly looking for a made-up external enemy to divert attention from the real problems.
People like Epstein and mass woman/child exploitation have existed for thousands of years in the past, and will exist thousands of years in the future. It's part of the nature of the rich and powerful to execute on their deranged fetishes, it's been documented in writing since at least the Roman and Ottoman empires.
Hell, I can guarantee you there are other Epsteins operating in the wild right now that we haven't heard of (yet); it's not like he was in any way unique. I can also guarantee you that 1 in 5-10 normal-looking people you meet daily on the street have deranged desires similar to those of the guests on Epstein's island, but can't execute on them because they're not rich and influential enough to get away with it; they'd do it if they could.
Except Doom wasn't producing illegal content.
The point is that Grok is generating illegal content for those jurisdictions. In France you can't generate CSAM; in the UK you can't distribute CSAM. Those are actual laws with legal tests, and none of them require actual people to be depicted; the images just need to depict _children_ to be illegal.
Moral panics require new laws to enforce, generally. This is just enforcing already existing laws.
Moreover, had it been any other site, it would have been totally shut down by now and the servers impounded. It's only because Musk is close to Trump and rich that he's escaped the fate that you or I would have met if we'd done the same.
Sure but where's the proof that Grok is actually producing illegal content? I searched for news sources, but they're just all parroting empty accusations not concrete documented cases.
Do you guys even hear yourselves?
Some of those might still try.
And what does AI have to do with this? Haven't child predators existed before AI?
Where's the proof that AI produces more child predators?
You're just going in circles without any arguments.
> Another line of reasoning is that with more fake CP it is more difficult to research the real CP, hunt down the perpetrators and consequently save children.
(own quote)
Yes, the predators existed before AI, but also:
> I think the reasoning is that the AI contributes to more offenders (edited).
(own quote, edited)
To be clear, I don't think this line of reasoning is entirely convincing, but apparently some people do.
https://www.reuters.com/world/uk/starmers-government-aids-po...
Unlike the US administration, which seems to be fine with what Epstein and X are doing.
The UK's "investigation" is a farce.
No platform ever should allow CSAM content.
And the fact that they didn't even care and didn't want to spend money on implementing guardrails or moderation is deeply concerning.
This has, imho, nothing to do with model censorship, and everything to do with allowing that kind of content on a platform.
Should platforms allow violent AI images? How about "R-Rated" violence like we see in popular movies? Point blank executions, brutal and bloody conflict involving depictions of innocent deaths, torment and suffering... all good? Hollywood says all good, how about you? How far do you take your "unacceptable content" guidance?
Uncontrolled proliferation of AI CSAM makes detection of "genuine" material much harder and prosecution of perpetrators more difficult, and specifically, in many of the Grok cases, it harms young victims who were used as templates for the material.
Content is unacceptable if its proliferation causes sufficient harm, and this is arguably the case here.
The "oh its photoshop" defence was an early one, which required the law to change in the uk to be "depictions" of children, so that people who talk about ebephiles don't have an out for creating/distributing illegal content.
As a father there shouldn’t be any CSAM content anywhere.
And consider that it has already been shown that these models apparently had CSAM in their training data.
Also, what about the nudes of actual people? That is an invasion of privacy.
I am shocked that we are even discussing this.
I still believe that the EU and aligned countries would rather have America agree to much tighter speech controls, digital ID, and ToS-based speech codes, which apparently US Democrats partly or totally agree to. But if they have workable alternatives they will deal with them from a different position.
For some reason you forgot to mention "Like the US did with TikTok".
esp. when America already controls the main outlets through the Android Play Store and Apple's App Store, and yep, they have proven that they control them, not just that they happen to host them as a country
arguably America did have valid security concerns with Huawei, though, but if those are the rules then you cannot complain later on
They are tasked - and held to account by respective legislative bodies - with implementing the law as written.
Nobody wrote a law saying "Go after Grok". There is, however, a law in most countries about the creation and dissemination of CSAM and non-consensual pornography. Some of that law is relatively new (the UK only introduced some of these laws in recent years), but they all predate the current wave of AI investment.
Founders, boards of directors and their internal and external advisors could:
1. Read the law and make sure any tools they build comply
2. When told their tools don't comply take immediate and decisive action to change the tools
3. Work with law enforcement to apply the law as written
Those companies, if they find this too burdensome, have the choice of not operating in that market. By operating in that market, they both implicitly agree to the law, and are required to explicitly abide by it.
They can't then complain that the law is unfair (it's not), that it's being politicised (How? By whom? Show your working), and that this is all impossible in their home market where they are literally offering presents to the personal enrichment of the President on bended knee while he demands that ownership structures of foreign social media companies like TikTok are changed to meet the agenda of himself and his administration.
So, would the EU like more tighter speech controls? Yes, they'd like implementation of the controls on free speech enshrined in legislation created by democratically appointed representatives. The alternative - algorithms that create abusive content, of women and children in particular - are not wanted by the people of the UK, the EU, or most of the rest of the World, laws are written to that effect, and are then enforced by the authorities tasked with that enforcement.
This isn't "anti-democratic", it's literally democracy in action standing up to technocratic feudalism that is an Ayn Randian-wet dream being played out by some morons who got lucky.
The European Court of Human Rights has made this point (e.g. 29 Mar 2010, appl. no. 3394/03), and the Court of Justice of the European Union reaches a very similar conclusion (2 Mar 2021, C-746/18): prosecutors are part of the executive hierarchy and can’t be treated as the neutral, independent judicial check some procedures require.
For a local observer, this is made obvious by the fact that the procureur in France always follows the current political vibes, usually with just a few months' delay (extremely fast, when you consider how slowly justice works in the country).
As someone who has lived in (and followed current affairs in) both of these countries, I find this a very idealistic and naïve view. There can be a big gap between theory and practice.
> There are statutory instruments (in France, constitutional clauses), that determine the independence of these authorities.
> They are tasked - and held to account by respective legislative bodies -
It's worth noting here that the UK doesn't have separation of powers or a supreme court (in the US sense).
However, it's a very mainstream point of view, so I respect that he/she has laid it out pretty well, and I upvoted the comment.
You would be _amazed_ at the things that people commit to email and similar.
Here's a Facebook one (leaked, not extracted by authorities): https://www.reuters.com/investigates/special-report/meta-ai-...
I assume the raid is hoping to find communications to establish that timeline, maybe internal concerns that were ignored? Also internal metrics that might show they were aware of the problem. External analysts said Grok was generating a CSAM image every minute!!
https://www.washingtonpost.com/technology/2026/02/02/elon-mu...
> https://www.washingtonpost.com/technology/2026/02/02/elon-mu...
That article has no mention of CSAM. As expected, since you can bet the Post has lawyers checking.
What may they find, hypothetically? Who knows, but maybe an internal email saying, for instance, 'Management says keep the nude photo functionality, just hide it behind a feature flag', or maybe 'Great idea to keep a backup of the images, but must cover our tracks', or perhaps 'Elon says no action on Grok nude images, we are officially unaware anything is happening.'
OTOH, it is Musk.
You're not too far off.
There was a good article in the Washington Post yesterday about many, many people inside the company raising alarms about the content and its legal risk, but they were blown off by managers chasing engagement metrics. They even made up a whole new metric.
There were also prompts telling the AI to act angry or sexy or other things, just to keep users addicted.
In a further comment you are using a US-focused organization to define an English-language acronym. How does this relate to a French investigation?
As for how it relates, well if the French do find that "Grok's CSAM Plan" file, they'll need to know what that acronym stands for. Right?
Wheras "CSAM isn’t pornography—it’s evidence of criminal exploitation of kids." https://rainn.org/get-informed/get-the-facts-about-sexual-vi...
Even if some kid makes a video of themselves jerking off for their own personal enjoyment, unprompted by anyone else, if someone else gains access to it (e.g. a technician at a store or an unprincipled guardian) and makes a copy for themselves, they're criminally exploiting the kid by doing so.
There was a cartoon picture I remember seeing around 15+ years ago of Bart Simpson performing a sex act. In some jurisdictions (such as Australia), this falls under the legal definition.
You don't think it's worse to molest a child than to not molest a child?
Huge difference here in Europe. CSAM is a much more serious crime. That's why e.g. Interpol runs a global database of CSAM but doesn't bother for mere child porn.
First paragraph on Wikipedia
> Child pornography (CP), also known as child sexual abuse material (CSAM) and by more informal terms such as kiddie porn,[1][2][3] is erotic material that involves or depicts persons under the designated age of majority. The precise characteristics of what constitutes child pornography vary by criminal jurisdiction.[4][5]
Honestly, reading your link got me seriously facepalming. The whole argument seems to be centered on the fact that sexualizing children is disgusting, hence it shouldn't be called porn. While I'd agree that sexualizing kids is disgusting, denying that it's porn on those grounds feels kinda... childish? Like someone covering their ears and shouting loudly in order not to hear the words the adults around them are saying.
Perhaps similar to how we have a word for murder that is different from "killing" even though murder always involves killing.
"...the encyclopedia anyone can edit." Yes, there are people who wish to redefine CSAM to include child porn - including even that between consenting children committing no crime and no abuse.
Compare and contrast Interpol. https://www.interpol.int/en/Crimes/Crimes-against-children/A...
> The whole argument seems to be centered around the fact that sexualizing children is disgusting, hence it shouldn't be called porn.
I have no idea how anyone could reasonably draw that conclusion from this thread.
> > Honestly, reading your link got me seriously facepalming. The whole argument seems to be centered around the fact that sexualizing children is disgusting, hence it shouldn't be called porn.
Where exactly did you get the impression that I made this observation from this comment thread?
Your Interpol link seems to be literally using the same argument again, from a very casual glance, btw.
> We encourage the use of appropriate terminology to avoid trivializing the sexual abuse and exploitation of children.
> Pornography is a term used for adults engaging in consensual sexual acts distributed (mostly) legally to the general public for their sexual pleasure.
I assumed you expected us to know what you were referring to.
CSAM is the woke word for child pornography, which is the normal word for pornography involving children. Pornography is defined as material aiming to sexually stimulate, and CSAM is that.
I fear you could be correct.
CSSM?
Well, I'm sure CSAM has a negative connotation. Our UK Govt. doesn't keep a database of all CSAM found by the police because it's a positive thing.
This step could come before a police raid.
This looks like plain political pressure. No lives were saved, and no crime was prevented by harassing local workers.
Seizing records is usually a major step in an investigation. It's how you get evidence.
Sure, it could just be harassment, but this is also how normal police work looks. France has a reasonable judicial system, so absent other evidence I'm inclined to believe this was legit.
So the question becomes if it was done knowingly or recklessly, hence a police raid for evidence.
See also [0] for a legal discussion in the German context.
[0] https://arxiv.org/html/2601.03788v1
I think one big issue with this statement is that "CSAM" lacks a precise legal definition; the precise legal term(s) vary from country to country, with differing definitions. While sexual imagery of real minors is highly illegal everywhere, there's a whole lot of other material – textual stories, drawings, animation, AI-generated images of nonexistent minors – which can be extremely criminal on one side of an international border and de facto legal on the other.
And I'm not actually sure what the legal definition is in France; the relevant article of the French Penal Code 227-23 [0] seems superficially similar to the legal definition of "child pornography" in the United States (post-Ashcroft vs Free Speech Coalition), and so some–but (maybe) not all–of the "CSAM" Grok is accused of generating wouldn't actually fall under it. (But of course, I don't know how French courts interpret it, so maybe what it means in practice is something broader than my reading of the text suggests.)
And I think this is part of the issue – xAI's executives are likely focused on compliance with US law on these topics, less concerned with complying with non-US law, in spite of the fact that CSAM laws in much of the rest of the world are much broader than in the US. That's less of an issue for Anthropic/Google/OpenAI, since their executives don't have the same "anything that's legal" attitude which xAI often has. And, as I said – while that's undoubtedly true in general, I'm unsure to what extent it is actually true for France in particular.
[0] https://www.legifrance.gouv.fr/codes/section_lc/LEGITEXT0000...
The company made and released a tool with seemingly no guard-rails, which was used en masse to generate deepfakes and child pornography.
On the one hand, it seems "obvious" that Grok should somehow be legally required to have guardrails stopping it from producing kiddie porn.
On the other hand, it also seems "obvious" that laws forcing 3D printers to detect and block attempts to print firearms are patently bullshit.
The thing is, I'm not sure how I can reconcile those two seemingly-obvious statements in a principled manner.
If you use a service like Grok, then you are using somebody else's computer/things. X is the owner of the computer that produced CP. So of course X is at least also a bit liable for producing CP.
I'd guess Elon is responsible for that product decision.
There is no functionality for the users to review and approve "Grok" responses to their tweets.
If you’re hosting content, why shouldn’t you be responsible? Because your business model is impossible if you’re held to account for what’s happening on your premises?
Without safe harbor, people might have to jump through the hoops of buying their own domain name, and hosting content themselves, would that be so bad?
Grok makes it trivial to create fake CSAM or other explicit images. Before, if someone spent a week in Photoshop to do the same, it wouldn't be Adobe that got the blame.
Same for 3D printers. Before, anyone could make a gun provided they had the right tools (which are very expensive); now it's being argued that 3D printers are making this more accessible. Although I would argue it's always been easy to make a gun: all you need is a piece of pipe. So I don't entirely buy the moral panic against 3D printers.
Where that threshold lies I don't know. But I think that's the crux of it. Technology is making previously difficult things easier, to the benefit of all humanity. It's just unfortunate that some less-nice things have also been included.
Without such clear legal definitions going after Grok while not going after photoshop is just an act of political pressure.
What you’re implying here is that Musk should be immune from any prosecution simply because he is right wing, which…
This isn’t about AI or CSAM (Have we seen any other AI companies raided by governments for enabling creation of deepfakes, dangerous misinformation, illegal images, or for flagrant industrial-scale copyright infringement?)
Do you have any evidence for that? As far as I can tell, this is false. The only thing I saw was Grok changing photos of adults into them wearing bikinis, which is far less bad.
This is how it works, at least in civil-law countries. If the prosecutor has a reasonable suspicion that a crime is taking place, they send the so-called "judiciary police" to gather evidence. If they find none (or the findings are inconclusive, etc.) the charges are dropped; otherwise they ask the court to go to trial.
On some occasions I take on judiciary police duties for animal welfare. Just last week I participated in a raid. We were not there to arrest anyone, just to gather evidence so the prosecutor could decide whether to press charges and go to trial.
For obvious reasons, decent people are not about to go out and try to generate child sexual abuse material to prove a point to you, if that's what you're asking for.
https://x.com/i/grok/share/1cd2a181583f473f811c0d58996232ab
The claim that they released a tool with "seemingly no guardrails" is therefore clearly false. I think what instead happened here is that some people found a hack to circumvent some of those guardrails via something like a jailbreak.
https://www.bbc.co.uk/news/articles/cvg1mzlryxeo
Also, X seem to disagree with you and admit that CSAM was being generated:
https://arstechnica.com/tech-policy/2026/01/x-blames-users-f...
Also the reason you can’t make it generate those images is because they implemented safeguards since that article was written:
https://www.ofcom.org.uk/online-safety/illegal-and-harmful-c...
This is because of government pressure (see Ofcom link).
I’d say you’re making yourself look foolish but you seem happy to defend nonces so I’ll not waste my time.
That post doesn't contain such an admission, it instead talks about forbidden prompting.
> Also the reason you can’t make it generate those images is because they implemented safeguards since that article was written:
That article links to this article: https://x.com/Safety/status/2011573102485127562 - which contradicts your claim that there were no guardrails before. And as I said, I already tried it a while ago, and Grok also refused to create images of naked adults then.
Says who? Musk?
I wouldn't even consider this a reason, if it weren't for the fact that OpenAI and Google, and hell, literally every image model out there, all have the same "this guy edited this underage girl's face into a bikini" problem (this was the most public example I've heard, so I'm going with it). People still jailbreak ChatGPT, and they've poured how much money into that?
What would happen if Volvo made a special baby-killing model with extra spikes?
No, wait, Volvo is European. They'd impose a 300% tariff and direct anyone who wanted a baby-killing model car to buy one from US manufacturers instead.
The rich can join in the austerity too. No one voted for them. We've been conditioned to pick acquiescence or poverty. We were abused into kowtowing to a bunch of pants-shitting, dementia-addled olds educated in religious crackpottery. Their economic and political memes are just that, memes, not immutable physical truth.
In America, as evidenced by the public not in the streets protesting for single payer comprehensive healthcare, we clearly don't want to be on the hook for each other's lives. That's all platitudes and toxic positivity.
Hopes and prayers, bloodletting was good enough for the Founders!
So fuck the poor and the rich. Burn it all down.
It definitely makes it clear what is expected of AI companies. Your users aren't responsible for what they use your model for, you are, so you'd better make sure your model can't ever be used for anything nefarious. If you can't do that without keeping the model closed and verifying everyone's identities... well, that's good for your profits I guess.
The main problem with the image generators is that they are used to harass and smear people (and children...) Those were always illegal to do.
Everybody else has teams building guardrails to mitigate this fundamental existential horror of these models. Musk fired all the safety people and decided to go all in on “adult” content.
Can't, because even before GenAI the "oh, it's generated in Photoshop" or "they just look young" excuse was used successfully to allow a lot of people to walk free. The law was tightened in the early 2000s for precisely this reason.
If you want. In many countries the law doesn’t. If you don’t like the law your billion dollar company still has to follow it. At least in theory.
Did X do enough to prevent its website being used to distribute illegal content - non-consensual sexual material of both adults and children?
Now reintroduce AI generation, where X plays a more active role in facilitating the creation of that illegal content.
There's essentially a push to end the remnants of the free speech Internet by making the medium responsible for the speech of its participants. Let's not pretend otherwise.
In the UK, you must take "reasonable" steps to remove illegal content.
This normally means some basic detection (i.e. fingerprinting against a collaborative database, which is widely used) or, if a user is consistently uploading said stuff, banning them.
Allowing a service that you run to continue to generate said illegal content, even after you publicly admit that you know it's wrong, is not reasonable.
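(For the curious, the fingerprinting mentioned above is conceptually just looking up an image's fingerprint in a shared blocklist. Below is a minimal sketch in Python; the hash list is made up for illustration, and real deployments use perceptual hashes such as PhotoDNA or PDQ, which survive re-encoding and small edits, whereas the plain SHA-256 shown here only catches byte-identical copies.)

    import hashlib

    # Hypothetical local copy of a collaborative blocklist of known-bad
    # fingerprints (industry databases distribute hash sets like this;
    # real lists contain millions of entries).
    KNOWN_BAD_HASHES = {
        "3a7bd3e2360a3d29eea436fcfb7e44c735d117c42d1c1835420b6b9942dd4f1b",
    }

    def fingerprint(image_bytes: bytes) -> str:
        # Exact-match fingerprint; a perceptual hash would go here instead,
        # so that near-duplicates of known material still match.
        return hashlib.sha256(image_bytes).hexdigest()

    def should_block(upload: bytes) -> bool:
        # True if the upload matches a known-bad fingerprint.
        return fingerprint(upload) in KNOWN_BAD_HASHES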
this is not compatible with that line of business - perhaps one of the reasons nothing is done in Europe these days
Yes, they could have an uncensored model, but then they would need proper moderation, deleting this kind of content instantly or banning users who produce it. Or not allow it in the first place.
It doesn't matter how the CSAM is produced; the only thing that matters is that it is on the platform.
I am flabbergasted that people even defend this.
Firstly, does the open model explicitly/tacitly allow CSAM generation?
Secondly, when the trainers are made aware of the problem, do they ignore it or attempt to put protections in place?
Thirdly, do they pull in data that is likely to allow that kind of content to be generated?
Fourthly, when they are told that this is happening, do they pull the model?
Fifthly, do they charge for access/host the service and allow users to generate said content on their own servers?
But this is about hosting a model with allegedly insufficient safeguards against harassing and child-sexualizing images, isn't it?
Correct comparison would be:
You provide a photo studio with an adjacent art gallery and allow people to shoot CSAM content there and then exhibit their work.
Seems like you'd want to subpoena source code or gmail history or something like that. Not much interesting in an office these days.
The warrant will have detailed what it is they are looking for, French warrants (and legal system!) are quite a bit different than the US but in broad terms operate similarly. It suggests that an enforcement agency believes that there is evidence of a crime at the offices.
As a former IT/operations guy I'd guess they want on-prem servers with things like email and shared storage, stuff that would hold internal discussions about the thing they were interested in, but that is just my guess based on the article saying this is related to the earlier complaint that Grok was generating CSAM on demand.
For a net company in 2026? Fat chance.
[1] This was also something Google did: changing access rights for people in the China office who were not "vetted" (for some definition of vetted), feeling they could be an exfiltration risk. Imagine a DGSE agent under cover as an X employee who carefully puts a bunch of stuff on a server in the office (doesn't trigger IT controls) and then lets the prosecutors know it's ready, and they serve the warrant.
That's aside from the fact that they're a publicly traded company under obligation to keep a gazillion records anyway like in any other jurisdiction.
Which company is publicly traded?
... within 30 days, right? The longest "raid" in history.
Sabu was put under pressure by the FBI, they threatened to place his kids into foster care.
That was legal. Guess what, similar things would be legal in France.
We all forget that money is nice, but nation states have real power. Western liberal democracies just rarely use it.
The same way the president of the USA can order a drone strike on a Taliban warlord, the president of France could order Musk's plane to be escorted to Paris by three fighter jets.
Interesting point. There's a top gangster who can buy anything in the prison commissary; and then there's the warden.
I remember something (probably linked from here), where the essayist was comparing Jack Ma, one of the richest men on earth, and Xi Jinping, a much lower-paid individual.
They indicated that Xi got Ma into a chokehold. I think he "disappeared" Ma for some time. Don't remember exactly how long, but it may have been over a year.
But China is different. I'm not sure most of Western Europe will go that far in most cases.
https://www.cbsnews.com/miami/news/venezuela-survey-trump-ma...
But the celebratory pics, which were claimed to be from Venezuela but were actually from Miami and elsewhere (including, I kid you not, an attempt to pass off Argentines celebrating a Copa America win)... that is indicative of "the vast majority of Venezuela"?
If I were smarter, I might start to wonder why, if President Maduro was so unpopular, his abductors had to resort to fake footage, which was systematically outed and debunked by independent journalists within 24 hours. I mean, surely enough real footage should exist.
Probably better not to have inconvenient non-US-approved independent thoughts like that.
Color me surprised.
https://www.tampafp.com/rand-paul-and-marco-rubio-clash-over...
To me, that's the distinction between political opponents I can respect, and, well, whatever we're seeing now.
You got this information from American media (or their allies').
In reality, Venezuelans flooded the streets in marches demanding the return of their president.
Hypocrisy at its finest.
Claim that you suspect there may be abuse, and it will trigger a case for a "worrying situation".
Then it's a procedural lottery:
-> If you get lucky, they will investigate, meet the people, and dismiss the case.
-> If you get unlucky, they will take the baby, and it's only then, after a long investigation and a "family assistant" (who will check on you every day), that you can recover your baby.
Typically it's an ex-wife who doesn't like the ex-husband, but it can be a neighbor, etc.
One worker explains that they don't really have time to investigate when processing reports: https://www.youtube.com/watch?v=VG9y_-4kGQA and they have to act very fast, and by default it is safer to remove the child from the family.
The boss of such an agency doesn't even take the time to answer the journalists there...
-> Example of such case (this man is innocent): https://www.lefigaro.fr/faits-divers/var-un-homme-se-mobilis...
but I can't blame them either, it's not easy to make the right calls.
[0] https://www.cbc.ca/news/canada/manitoba/winnipeg-mom-cfs-bac...
[1] https://indianexpress.com/article/india/ariha-family-visit-t...
If you call 119 it gets assessed and potentially forwarded to the right department, which then assesses it again and might (quite likely will) trigger an inspection. The people who turn up have broad powers to seize children from the home in order to protect them from abuse.
In general this works fine. Unfortunately in some circumstances this does give a very low skilled/paid person (the inspector) a lot of power, and a lot of sway with judges. If this person is bad at their job for whatever reason (incompetence/malice) it can cause a lot of problems. It is very hard to prove a person like this wrong when they are covering their arse after making a mistake.
AFAIK similar systems are present in most western countries, and many of them - like France - are struggling with funding and are likely cutting in the wrong place (audit/rigour) to meet external KPIs. One of the worst ways this manifests is the creation of "quick scoring" methods, which can end up ranking misunderstandings (e.g. someone said a thing they didn't mean) very highly, but subtle evidence of abuse moderate to low.
So while this is a concern, this is not unique to France, this is relatively normal, and the poster is massively exaggerating the simplicity.
There was a huge mess right after MeToo when an inspector went against the court's ruling. The court had given the father sole custody in an extremely messy divorce, and the inspector did not agree with the decision. As a result they removed the child from his father, in direct contradiction of the court's decision, and put the child through 6 years of isolation and abuse with no access to school. It took investigative journalists a while, but the result of the case getting highlighted in the media was that the inspector and supervisor have now been fired, with two additional workers under investigation for severe misconduct. Four more workers would be under investigation, but too much time has passed. The review board should have prevented this, as should the inspector's supervisor, but those safety nets failed in this case, in part because of the cultural environment at the time.
This seems guaranteed to occur every year then… since incompetence/malice will happen eventually with thousands upon thousands of cases?
Not at all. This job will go to an "AI" any moment now.
/i
-> Whoops, someone called 119 to report a "worrying" situation, baby removed. It's already been two years.
There are some non-profits fighting against this: https://lenfanceaucoeur.org/quest-ce-que-le-placement-abusif...
That being said, it's obviously a very small percentage, let's not exaggerate, but it's quite sneaky.
I'm sure they have much better and quieter ways to do that.
Whereas a raid is #1 choice for max volume...
I mean, if you're a sole caretaker and you've been arrested for a crime, and the evidence looks like you'll go to prison, you're going to have to decide what to do with the care of your kids on your mind. I suppose that would pressure you to become an informant instead of taking a longer prison sentence, but there's pressure to do that anyway, like not wanting to be in prison for a long time.
Elon has ICBMs, but France has warheads.
Also, they are restricted in how they use it, and defendants have rights and due process.
> Sabu was put under pressure by the FBI, they threatened to place his kids into foster care.
Though things like that can happen, which are very serious.
As they say: you can beat the rap but not the ride. If a state wants to make your life incredibly difficult for months or even years they can, the competent ones can even do it while staying (mostly) on the right side of the law.
People are putting a lot of weight on the midterm elections which are more or less the last line of defense besides a so far tepid response by the courts and even then consequence free defiance of court orders is now rampant.
We're really near the point of no return and a lot of people don't seem to notice.
A lot of people are cheering it (some on this very site).
It's a nice sentiment, if true. ICE is out there, right now today, ignoring both individual rights as well as due process.
/s
As we're seeing with the current US President... the government doesn't (have to) care.
In any case, CSAM is the one thing other than Islamist terrorism that will bypass a lot of restrictions on how police are supposed to operate (see e.g. Encrochat, An0m) across virtually all civilized nations. Western nations also will take anything that remotely smells like Russia as a justification.
Well, that's particular to the US. It just shows that checks and balances are not properly implemented there; it's just that previous presidents weren't exploiting them maliciously for their own gain.
That due process only exists to the extent the branches of govt are independent, have co-equal power, and can hold and act upon different views of the situation.
When all branches of govt are corrupted or corrupted to serve the executive, as in autocracies, that due process exists only if the executive likes you, or accepts your bribes. That is why there is such a huge push by right-wing parties to take over the levers of power, so they can keep their power even after they would lose at the ballot box.
This is pretty messed up btw.
Child social work systems in the USA are very messed up. It is not uncommon for minority families to lose the right to parent their children over very innocuous things that would not happen to a non-oppressed class.
It is just another way for the justice/legal system to pressure families that have not been convicted / penalized under the supervision of a court.
And this isn't the only lever they use.
Every time I read crap like this I just think of Aaron Swartz.
>That was legal. Guess what, similar things would be legal in France.
Lawfare is... good now? Between Trump being hit with felony charges for falsifying business records (lawfare is good?) and Lisa Cook getting prosecuted for mortgage fraud (lawfare is bad?), I've honestly lost track at this point.
>The same way the president of the USA can order a Drone strike on a Taliban war lord, the president of France could order Musks plane to be escorted to Paris by 3 Fighter jets.
What's even the implication here? That they're going to shoot his plane down? If there's no threat of violence, what does the French government even hope to achieve with this?
Again: the threat is so clear that you rarely have to execute on it.
That's not a credible threat because there's approximately a 0% chance France would actually follow through with it. Not even Trump would resort to murder to get rid of his domestic adversaries. As we've seen with the feds, the best he could muster was some spurious prosecutions. France murdering someone would put them on par with Russia or India.
https://www.faa.gov/air_traffic/publications/atpubs/aim_html...
If the captain of the plane disobeyed a direct order like that from a nation, his career would be limited. Yeah, Elon might throw money at him, but that guy is most likely never allowed to fly near any French territory again. I'd guess the whole cabin crew as well.
Being cleared to fly anywhere in the world is their job.
It would be quite stupid to lose that, like a truck driver getting his license revoked for a DUI.
>If the captain of the plane disobeyed a direct order like that from a nation, his career would be limited. Yeah, Elon might throw money at him, but that guy is most likely never allowed to fly near any French territory again. I'd guess the whole cabin crew as well.
Again, what's France trying to do? Refuse entry to France? Why do they need to threaten shooting down his jet for that? Just harassing/pranking him (eg. "haha got you good with that jet lmao")?
Don't give them ideas
Well, when everything is lawfare it logically follows that it won't always be good or always be bad. It seems Al Capone being taken down for tax fraud would similarly be lawfare by these standards, or am I missing something? Perhaps lawfare (sometimes referred to as "prosecuting criminal charges", as far as I can tell, given this context) is just in some cases and unjust in others.
Depends on how much faith you have in the current administration. Russia limits presidents to two 6-year terms, yet Putin is in power since 2000.
Yes, he has been in power since 2000 (1999, actually), but he was President 2000-2008, then Prime Minister 2008-2012, then President again. Counting only the six-year terms from 2012, his second ended in 2024, so the current one would be his third (by the magic of changing the constitution and legal quibbles which effectively allow a president to stay in charge well past two terms, AFAIU).
EU, maybe not. France? A nuclear state? Paris is properly sovereign.
> people with strong support of the current government
Also known as leverage.
Let Musk off the hook for a sweetheart trade deal. Trump has a track record of chickening out when others show strength.
That is true. But nukes are not magic. Explain to me how you imagine the series of events where Paris uses their nukes to get the USA to extradite Elon to Paris. Because I'm just not seeing it.
Paris doesn’t need to back down. And it can independently exert effort in a way other European countries can’t. Musk losing Paris means swearing off a meaningful economic and political bloc.
At this point a nuclear power like France has no issue with using covert violence to produce compliance from Musk and he must know it.
These people have proven themselves to be existential threats to French security and France will do whatever they feel is necessary to neutralize that threat.
Musk is free to ignore French rule of law if he wants to risk being involved in an airplane accident that will have rumours and conspiracies swirling around it long after he’s dead and his body is strewn all over the ocean somewhere.
It's just that the West has avoided doing that to each other, because they were all essentially allied until recently and because the political implications were deemed too severe.
I don't think, however, that France has anything to win by doing it, or any interest whatsoever, and I doubt there's a legal framework the French government can or would want to exploit to conduct something like that legally (like calling something an emergency situation or a terrorist group, for example).
Seriously, every powerful state engages in state terrorism from time to time because they can, and the embarrassment of discovery is weighed against the benefit of eliminating a problem. France is no exception: https://en.wikipedia.org/wiki/Sinking_of_the_Rainbow_Warrior
OpenDNS is censored in France... so imagine
Why not? After all, that's in vogue today. Trump is ignoring all the international agreements and rules, so why should others follow them?
The second Donald Trump threatened to invade a nation allied with France was the second anyone who works with Trump became a legitimate military target.
Like a cruel child dismembering a spider one limb at a time France and other nations around the world will meticulously destroy whatever resources people like Musk have and the influence it gives him over their countries.
If Musk displays a sufficient level of resistance to these actions the French will simply assassinate him.
PS Yes, Greenpeace is a bunch of scientifically-illiterate fools who have caused far more damage than they prevented. Doesn't matter because what France did was still clearly against the law.
You don't get to say no to these things.
What happened to due process? Every major firm should have a "dawn raid" policy to comply while preserving rights.
Specific to the Uber case(s), if it were illegal, then why didn't Uber get criminal charges or fines?
At best there's an argument that it was "obstructing justice," but logging people off, encrypting, and deleting local copies isn't necessarily illegal.
They had a sweet deal with Macron. Prosecution became hard to continue once he got involved.
Or they had a weak case. Prosecutors even drop winnable cases because they don't want to lose.
[1]: https://www.lemonde.fr/pixels/article/2022/07/10/uber-files-...
[2]: https://www.radiofrance.fr/franceinter/le-rapport-d-enquete-...
Put this up there with nonsensical phrases like "violent agreement."
;-)
I don't see aggressive compliance defined anywhere. Violent agreement has definitions, but it feels like it's best defined as a consulting buzzword.
They will explain that it was done remotely and whatnot but then the company will be closed in the country. Whether this matters for the mothership is another story.
Elon would love it. So it won't happen.
Elon probably isn’t paying them enough to be the lightning rod for the current cross-Atlantic tension.
This was a common action during the Russian invasion of Ukraine for companies that supported Ukraine and closed their operations in Russia.
Covered here: https://www.theguardian.com/news/2022/jul/10/uber-bosses-tol...
Obviously, the government can just threaten to fine you any amount, close operations or whatever, but your company can just decide to stop operating there, like Google after Russia imposed an absurd fine.
As France discovered the hard way in WW2, you can put all sorts of rock-solid security around the front door, only to be surprised when your opponent comes in by the window.
What, thinking HQ wouldn't cancel them?
I assume that they have opened a formal investigation and are now going to the office to collect/purloin evidence before it's destroyed.
Most FAANG companies have training specifically for this. I assume X doesn't anymore, because they are cool and edgy, and staff training is for the woke.
“Google intended to subvert the discovery process, and that Chat evidence was ‘lost with the intent to prevent its use in litigation’ and ‘with the intent to deprive another party of the information’s use in the litigation.’”
https://storage.courtlistener.com/recap/gov.uscourts.cand.37...
VW is another case where similar things happened:
https://www.bloomberg.com/news/articles/2017-01-12/vw-offici...
The thing is: companies don't go to jail, employees do.
I didn't work anywhere near that level, or on anything dicey enough that I needed an "oh shit, delete everything, the Feds are here" plan. Having such a plan would be a conspiracy to pervert the course of justice (I'm not sure what the common law/legal code name for that is).
The stuff I worked on was legal and in the spirit of the law, along with a paper trail (that I also still have) proving that.
Prosecution must present a valid search warrant for *specific* information. They don't get carte blanche, so the Uber way is correct: lock the computers and let the courts decide.
Under the civil code, it's quite possibly different. The French have had ~3 constitutions in the last 80 years. They also don't have the concept of case history. Who knows what the law actually is.
Mine had a scene where some bro tried to organise the resistance. A voice-over told us that he was arrested for blocking a legal investigation and was liable to be fired due to reputational damage.
X's training might be like you described, but everywhere else that is even vaguely beholden to law and order, it would be the opposite.
This would be done in parallel for key sources.
There is a lot of information on physical devices that is helpful, though. Even discovering additional apps and services used on the devices can lead to more discovery via those cloud services, if relevant.
Physical devices have a lot of additional information, though: Files people are actively working on, saved snippets and screenshots of important conversations, and synced data that might be easier to get offline than through legal means against the providers.
In outright criminal cases it's not uncommon for individuals to keep extra information on their laptop, phone, or a USB drive hidden in their office as an insurance policy.
This is yet another good reason to keep your work and personal devices separate, as hard as that can be at times. If there's a lawsuit you don't want your personal laptop and phone to disappear for a while.
In these situations, refusing to provide those keys or passwords is an offense.
The employees who just want to do their job and collect a paycheck aren’t going to prison to protect their employer by refusing to give the password to their laptop.
The teams that do this know how to isolate devices to avoid remote kill switches. If someone did throw a remote kill switch, that’s destruction of evidence and a serious crime by itself. Again, the IT guy isn’t going to risk prison to wipe company secrets.
Yes.
EDIT: It seems from other comments that it may have been Uber I was reading about. The badging system I have personally observed outside the Gigafactories. Apologies for the mixup.
I think as far as Musk is concerned, laws only apply in the "don't get caught" sense.
-> crimes ? what crimes ?
lol, they summoned Elon for a hearing on 420
"Summons for voluntary interviews on April 20, 2026, in Paris have been sent to Mr. Elon Musk and Ms. Linda Yaccarino, in their capacity as de facto and de jure managers of the X platform at the time of the events,
Given his recent "far right" bromance that's probably not a good idea ;)
Most likely, it's Hitler's birthday after all
I'm not at all familiar with French law, and I don't have any sympathy for Elon Musk or X. That said, is this a crime?
Distorted the operation how? By making their chatbot more likely to say stupid conspiracies or something? Is that even against the law?
> complicité de détention d’images de mineurs présentant un caractère pédopornographique [complicity in possession of images of minors of a child-pornographic nature]
> complicité de diffusion, offre ou mise à disposition en bande organisée d'image de mineurs présentant un caractère pédopornographique [complicity in the distribution, offering, or making available, as part of an organised group, of images of minors of a child-pornographic nature]
[1]: https://www.tribunal-de-paris.justice.fr/sites/default/files...
Sorry, but that's a major translation error. "pédopornographique" properly translated is child porn, not child sexual abuse material (CSAM). The difference is huge.
> The term “child pornography” is currently used in federal statutes and is defined as any visual depiction of sexually explicit conduct involving a person less than 18 years old. While this phrase still appears in federal law, “child sexual abuse material” is preferred, as it better reflects the abuse that is depicted in the images and videos and the resulting trauma to the child. In fact, in 2016, an international working group, comprising a collection of countries and international organizations working to combat child exploitation, formally recognized “child sexual abuse material” as the preferred term.
Child porn is csam.
[1]: https://www.justice.gov/d9/2023-06/child_sexual_abuse_materi...
GDPR and DMA actually have teeth. They just haven't been shown yet because the usual M.O. for European law violators is first, a free reminder "hey guys, what you're doing is against the law, stop it, or else". Then, if violations continue, maybe two or three rounds follow... but at some point, especially if the violations are openly intentional (and Musk's behavior makes that very very clear), the hammer gets brought down.
Our system is based on the idea that we institute complex regulations, and when they get introduced and stuff goes south, we assume that it's innocent mistakes first.
And in addition to that, there's the geopolitical aspect... basically, hurt Musk to show Trump that, yes, Europe means business and has the means to fight back.
As for the allegations:
> The probe has since expanded to investigate alleged “complicity” in spreading pornographic images of minors, sexually explicit deepfakes, denial of crimes against humanity and manipulation of an automated data processing system as part of an organised group, and other offences, the office said in a statement Tuesday.
The GDPR/DMA stuff was just the opener anyway. CSAM isn't liked by authorities at all, and genocide denial (we're not talking about Palestine here, calm your horses y'all, we're talking about Holocaust denial) is a crime in most European jurisdictions (as are the straight-arm salute and other displays of fascist insignia). We actually learned something from WW2.
...but then other commenters reminded me there is another thing on the same date, which might have been the actual troll aimed at Elmo to get him all worked up.
I believe people are looking too much into 20 April → 4/20 → 420
No. It's 20 April in the rest of the world: 20/4.
When they’re both private, fine, whatever.
Or is there any France-specific compliance that must be done in order to operate in that country?
People who have found exploits, just like other generative AI tool.
"Uh guys, little heads up: there are some agents of federal law enforcement raiding the premises, so if you see that. That’s what that is."
I mean, perhaps it's time to completely drop these US-owned, closed-source, algo-driven controversial platforms, and start treating the communication with the public that funds your existence in different terms. The goal should be to reach as many people, of course, but also to ensure that the method and medium of communication is in the interest of the public at large.
... thereby driving up adoption far better than Twitter itself could. Ironic or what.
I think we are getting very close to the EU's own great firewall.
There is currently a sort of identity crisis in the regulation. Big tech companies are breaking the laws left and right. So which is it?
- fine harvesting mechanism? Keep as-is.
- true user protection? Blacklist.
Who decides what communication is in the interest of the public at large? The Trump administration?
I suppose the answer, if we're serious about it, is somewhat more nuanced.
To begin, public administrations should not get to unilaterally define "the public interest" in their communication, nor should private platforms for that matter. Assuming we're still talking about a democracy, the decision-making should happen democratically, via a combination of law + rights + accountable institutions + public scrutiny, with implementation constraints that maximise reach, accessibility, auditability, and independence from private gatekeepers. The last bit is rather relevant, because the private sector's interests and the citizen's interests are nearly always at odds in any modern society, hence the state's roles as rule-setter (via democratic processes) and arbiter. Happy to get into further detail regarding the actual processes involved, if you're genuinely interested.
That aside - there are two separate problems that often get conflated when we talk about these platforms:
- one is reach: people are on Twitter, LinkedIn, Instagram, so publishing there increases distribution; public institutions should be interested in reaching as many citizens as possible with their comms;
- the other one is dependency: if those become the primary or exclusive channels, the state's relationship with citizens becomes contingent on private moderation, ranking algorithms, account lockouts, paywalls, data extraction, and opaque rule changes. That is entirely and dangerously misaligned with democratic accountability.
A potential middle position could be to use commercial social platforms as secondary distribution instead of as the authoritative channel, which in reality is often the case. However, due to the way societies work and how individuals operate within them, the public often won't actually come across the information until it's distributed on the most popular platforms. Which is why some argue that they should be treated as public utilities, since dominant communications infrastructure has a quasi-public function (rest assured, I won't open that can of worms right now).
Politics is messy in practice, as all balancing acts are - a normal price to pay for any democratic society, I'd say. Mix that with technology, social psychology and philosophies of liberty, rights, and wellbeing, and you have a proper head-scratcher on your hands. We've already done a lot to balance these, for sure, but we're not there yet and it's a dynamic, developing field that presents new challenges.
The charges are made up baloney, the victims don't exist, it's just more IP theft and cash grab.
* They exchanged various emails between 2012 and 2014 about Elon visiting the island
* They made plans for Elon to visit the island
* We don't know if Elon actually followed through on those plans and he denies it
I think it's premature to say he didn't go, and the latest batches of emails directly contradict the claim he wasn't ever invited.
See https://www.cnbc.com/2026/01/30/epstein-files-show-elon-musk...
https://www.theguardian.com/technology/2018/jul/15/elon-musk...
We don't know how many were pedo/rapists, but we know all of them liked to socialize with one and trade favours and spread his influence.
Oops... yeah, in retrospect it was even worse... no... you can and should be judged by the friends you keep and hang-out with... The same ones who seem to be circling the wagons with innocuous statements or attempts to find other scapegoats (DARVO)... hmm, what was that quote again:
"We must all hang together or we will all hang separately"
No abuse of a real minor is needed.
Strange that there was no disagreement before "AI", right? Yet now we have a clutch of new "definitions" all of which dilute and weaken the meaning.
> In Sweden for instance, CSAM is any image of an underage subject (real or realistic digital) designed to evoke a sexual response.
No corroboration found on web. Quite the contrary, in fact:
"Sweden does not have a legislative definition of child sexual abuse material (CSAM)"
https://rm.coe.int/factsheet-sweden-the-protection-of-childr...
> If you take a picture of a 14 year old girl (age of consent is 15) and use Grok to give her a bikini, or make her topless, then you are most definitely producing and possessing CSAM.
> No abuse of a real minor is needed.
Even the Google "AI" knows better than that. CSAM "is considered a record of a crime, emphasizing that its existence represents the abuse of a child."
Putting a bikini on a photo of a child may be distasteful abuse of a photo, but it is not abuse of a child - in any current law.
Are you from Sweden? Why do you think the definition was clear across the world and not changed "before AI"? Or is it some USDefaultism where Americans assume their definition was universal?
No. I used this interweb thing to fetch that document from Sweden, saving me a 1000-mile walk.
> Why do you think the definition was clear across the world and not changed "before AI"?
I didn't say it was clear. I said there was no disagreement.
And I said that because I saw only agreement. CSAM == child sexual abuse material == a record of child sexual abuse.
So you can't speak Swedish, yet you think you grasped the Swedish legal definition?
" I didn't say it was clear. I said there was no disagreement. "
Sorry, there are lots of different judical definitions about CSAM in different countries, each with different edge cases and how to handle them. I very doubt it, there is a disaggrement.
But my guess about your post is, that an American has to learn again there is a world outside of the US with different rules and different languages.
I guess you didn't read the doc. It is in English.
I too doubt there's material disagreement between judicial definitions. The dubious definitions I'm referring to are the non-judicial fabrications behind accusations such as the root of this subthread.
Sources? Sorry, your gut feeling does not matter. Especially if you are not a lawyer.
Feel free to share any you've seen.
Please don't use the "knowledge" of LLMs as evidence or support for anything. Generative models generate things that have some likelihood of being consistent with their input material, they don't "know" things.
Just last night, I did a Google search related to the cell tower recently constructed next to our local fire house. Above the search results, Gemini stated that the new tower is physically located on the Facebook page of the fire department.
Does this support the idea that "some physical cell towers are located on Facebook pages"? It does not. At best, it supports that the likelihood that the generated text is completely consistent with the model's input is less than 100% and/or that input to the model was factually incorrect.
It has been since at least 2012 here in Sweden. That case went to our highest court and they decided a manga drawing was CSAM (maybe you are hung up on this term though, it is obviously not the same in Swedish).
The holder was not convicted, but that is beside the point about the material.
This one?
"Swedish Supreme Court Exonerates Manga Translator Of Porn Charges"
https://bleedingcool.com/comics/swedish-supreme-court-exoner...
It has zero bearing on the "Putting a bikini on a photo of a child ... is not abuse of a child" you're challenging.
> and they decided a manga drawing was CSAM
No they did not. They decided "may be considered pornographic". A far lesser offence than CSAM.
https://www.regeringen.se/contentassets/5f881006d4d346b199ca...
> Även en bild där ett barn t.ex. genom speciella kameraarrangemang framställs på ett sätt som är ägnat att vädja till sexualdriften, utan att det avbildade barnet kan sägas ha deltagit i ett sexuellt beteende vid avbildningen, kan omfattas av bestämmelsen.
Which, translated, means that even an image where the depicted child did not participate in any sexual behaviour can be covered by the provision, if the image is presented in a way designed to appeal to the sexual drive - and indeed undressing a child using AI could be CSAM.
I say "could" because all laws are open to interpretation in Sweden and it depends on the specific image. But it's safe to say that many images produced by Grok are CSAM by Swedish standards.
Because that is up to the courts to interpret. You can't use your common-law experience to interpret the law in other countries.
That interpretation wasn't mine. It came from the Council of Europe doc I linked to. Feel free to let them know it's wrong.
I suspect none of us are lawyers with enough legal knowledge of the French law to know the specifics of this case
You should realize that children have committed suicide before because AI deepfakes of themselves have been spread around schools. Just because these images are "fake" doesn't mean they're not abuse, and that there aren't real victims.
Not at all. I am saying just it is not CSAM.
> You should realize that children have committed suicide before because AI deepfakes of themselves have been spread around schools.
It's terrible. And when "AI"s are found spreading deepfakes around schools, do let us know.
When you undress a child with AI, especially publicly on Twitter or privately through DM, that child is abused using the material the AI generated. Therefore CSAM.
I guess you mean pasting a naked body on a photo of a child.
> especially publicly on Twitter or privately through DM, that child is abused using the material the AI generated.
In which country is that?
Here in UK, I've never heard of anyone jailed for doing that. Whereas many are for making actual child sexual abuse material.
Musk's social media platform has recently been subject to intense scrutiny over sexualised images generated and edited on the site using its AI tool Grok.
Here you have a model that is actually creating the CSAM.
It seems more similar to a robot that is told to go kill someone and does so. Sure, someone told the robot to do something, but the creators of the robot really should have to put some safeguards to prevent it.
If the manufacturer advertised that the knife is not just for cooking but also stabbing people, then yes.
if the knife was designed to evade detection, then yes.
The way I see this is:
* If you can't restrict people from making kiddie porn with Grok, then it stands to reason at the very least, access to Grok needs to be strictly controlled.
* If you can restrict that, why wasn't that done? It can't be completely omitted from this conversation that Grok is, pretty famously, the "unrestrained" AI, which in most respects means it swears more, quotes and uses highly dubious sources of information that are friendly to Musk's personal politics, and occasionally spouts white nationalist rhetoric. So as part of their quest to "unwoke" Grok did they also make it able to generate this shit too?
There's nothing special about Grok in this regard. It wasn't trained to be a MechaHitler, nor to generate CSAM. It's just relatively uncensored[1] compared to the competition, which means it can be easily manipulated to do what the users tell it to, and that is biting Musk in the ass here.
And just to be clear, since apparently people love to jump to conclusions - I'm not excusing what is happening. I'm just pointing out the fact that the only special thing about Grok is that it's both relatively uncensored and easily available to a mainstream audience.
[1] -- see the Uncensored General Intelligence leaderboard where Grok is currently #1: https://huggingface.co/spaces/DontPlanToEnd/UGI-Leaderboard
Well, yes. You can make child pornography with any video-editing software. How is this exoneration?
> How is this exoneration?
I don't know; you tell me where I said it was? I'm just stating a fact that Grok isn't unique here, and if you want to ban Grok because of it then you need to also ban open weight models which can do exactly the same thing.
And the article is talking about a social media site. A different class of software and company.
> if you want to ban Grok
Straw man. Nobody has suggested this.
Source? I’m not seeing that in the French-language press.
There are many things where each is legal/ethical to provide, and where combining them might make business sense, but where we, as a society have decided to not allow combining them.
Meanwhile what I commonly see is people dunking on anything Musk-related because they dislike him, but give a free pass on similar things if it's not related to him.
There is no way this is true, especially if the system is PaaS only. Additionally, the system should have a way to tell if someone is attempting to bypass its safety measures, and act accordingly.
Grok brought that thought all the way to "... so let's not even try to prevent it."
The point is to show just how aware X were of the issue, and that they chose to repeatedly do nothing against Grok being used to create CSAM and probably other problematic and illegal imagery.
I can't really doubt they'll find plenty of evidence during discovery, it doesn't have to be physical things. The raid stops office activity immediately, and marks the point in time after which they can be accused of destroying evidence if they erase relevant information to hide internal comms.
The fact that users have found ways to hack around this is not evidence of X committing a crime.
https://github.com/xai-org/grok-prompts/blob/main/grok_4_saf...
X. xAI isn’t being raided. X is. If Instagram bought a girlfriend generator and built it into its app, it would face liability as well.
If every AI system can do this, and every AI system is incapable of preventing it, then I guess every AI system should be banned until they can figure it out.
Every banking app on the planet "is capable" of letting a complete stranger go into your account and transfer all your money to their account. Did we force banks to put restrictions in place to prevent that from happening, or did we throw our arms up and say: oh well the French Government just wants to pick on banks?
On a related note: given that AI is just a tool and requires someone to tell it to make CSAM, I think they will have to prove intent, possibly by grabbing chat logs, emails, and other internal communications. But I know very little about French law or international law.
I think your interpretation would be more along the line of making 1984, Brave New World, Fahrenheit 451 and The Handmaid's Tale a reality.
I will have to check that out, it sounds interesting. It was also pretty obvious how all the social media companies pushed the same narrative through COVID.
I don't like how these social networks and the media try to manipulate things but I don't think giving the government even more power will fix anything. It will probably make it worse. I think even if you had those laws on the books, you would still get manipulation through selective enforcement.
I think the only solution is education and individuals saying no to these platforms and their algorithmic feeds. I think we are already seeing a growing movement towards people either not using social media or using it way less than they did previously. I know for me personally, I use X but only follow tech people I like and only look at the "following" tab. It is a much better experience than the "for you" tab.
And how is that different from TV channels/media en large having laws to abide by? Slippery slope arguments are themselves slippery slopes..
No, I am not saying that it is the same. I am saying that it would start as "we are just going after the tech companies", but if you give the government an inch they will take a mile. They would take that and expand upon the hate-speech stuff you already see around the world as an excuse to arrest whoever they wanted.
I am a free-market person, so I think these sites are providing something to the market that people like, or they wouldn't be there. If you want to rein them in, fine, but you have to be careful how you word stuff or it gets pretty scary pretty quickly.
And the free market only works if there is a well-defined market with proper laws that are upheld. Otherwise it's a running competition where Meta/X just shoot every other competitor at the start and drive to the goal in a car. Adam Smith already knew this - you can't be a "free market person" while being happy with these giga-corporations trampling on laws left and right.
It's the usual deal from that crowd:
- when the left does it, it's just them using their civil liberties
- when the right does it, it's illegal manipulation, election interference, fascism and/or Russian disinformation.
It’s the same crowd which keeps using the phrase “our democracy”.
Behaviour like this really makes me wonder who they are, and who they deem not worthy to be included in “their” democracy.
>French authorities opened their investigation after reports from a French lawmaker alleging that biased algorithms on X likely distorted the functioning of an automated data processing system. It expanded after Grok generated posts that allegedly denied the Holocaust, a crime in France, and spread sexually explicit deepfakes, the statement said.
and fraudulent data extraction by an organised group.
Looking at the prompts below some of those images shows that even now, there's almost zero effort at Grok to filter prompts that are blatantly looking to create problematic material. People aren't being sneaky and smart, wordsmithing subtle cues to try to bypass content filtering; they're often saying "create this" bluntly and directly, and Grok is happily obliging.
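For what it's worth, even a crude pre-generation gate would catch requests that blunt. Here's a minimal sketch in Python of what such a prompt gate could look like - all names, categories, and thresholds are hypothetical illustration, not xAI's actual moderation pipeline:

    from dataclasses import dataclass

    BLOCKED_CATEGORIES = {"minors_sexual", "nonconsensual_sexual_imagery"}

    @dataclass
    class ModerationResult:
        category: str
        score: float  # classifier confidence in [0, 1]

    def classify(prompt: str) -> list[ModerationResult]:
        # Stand-in for a real moderation classifier; a production system
        # would use a trained model here, not keyword matching.
        lowered = prompt.lower()
        hits = []
        if "undress" in lowered or "remove her clothes" in lowered:
            hits.append(ModerationResult("nonconsensual_sexual_imagery", 0.9))
        return hits

    def gate_prompt(prompt: str, threshold: float = 0.5) -> bool:
        # Return True if generation may proceed, False if it must be refused.
        for result in classify(prompt):
            if result.category in BLOCKED_CATEGORIES and result.score >= threshold:
                return False  # refuse before any image generation runs
        return True

The point isn't that this toy filter is adequate - it's that the blunt "create this" prompts described above wouldn't survive even this.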
Sigh. The French raid statement makes no mention of CSAM.
I think that would delve into whether or not the USA would be considered a foreign adversary to France. I was under the impression we were allies since like the 1800s or so despite some little tiffs now and again.
[1] - https://www.congress.gov/bill/118th-congress/house-bill/7521
The closest I can think of is GDPR, which has its great aspects, and also the cookie law (which is widely misinterpreted). And some things like private IPs being PII, which promotes nonsensical "authority notifications" that are not used afterwards.
We have consulting companies doing yearly audits on companies to close the books. And yet hacks happen all the time. Without consequences.
There is an ocean between what is announced and lives on paper vs. the reality of the application. If you work in compliance and cybersecurity you see this every day.
But whatever zombie government France is running can't "ban" X anyway because it would get them one step closer to the guillotine. Like in the UK or Germany it is a tinderbox cruising on a 10-20% approval rating.
If "French prosecutor" want to find a child abuse case they can check the Macron couple Wikipedia pages.
By itself this isn't extraordinary in a democracy.
Paradox of tolerance. (The American right being Exhibit A for why trying to let sunlight disinfect a corpse doesn’t work.)
It is well known Musk amplifies his own speech and the words of those he agrees with on the platform, while banning those he doesn’t like.
https://www.theguardian.com/commentisfree/2024/jan/15/elon-m...
> could you clarify what the difference is between the near right and the far right?
It’s called far-right because it’s further to the right (starting from the centre) than the right. Wikipedia is your friend, it offers plenty of examples and even helpfully lays out the full spectrum in a way even a five year old with a developmental impairment could understand.
https://en.wikipedia.org/wiki/Far-right_politics
I cross-checked Wikipedia's information with another source: https://www.connexionfrance.com/news/french-election-is-it-c...
I don't think the Wikipedia characterization is far off a pretty commonly held sentiment. You are of course, able to disagree and consider them far-left, center, or whatever label you want.
You stated earlier that because Wikipedia called mild immigration reform far-right (which it did not to my reading, so you pointed to National Rally as an example) words don't mean anything. But words do mean things by consensus, and from my reading the consensus is that National Rally is far-right.
Of course, many far-right (and far-left) thinkers consider themselves centrists or mild, so there will be disagreement.
[1]: https://www.bbc.com/news/articles/cxeee385en1o [2]: https://www.politico.eu/article/france-far-right-faces-inter... [3]: https://www.reuters.com/world/europe/le-pens-far-right-waiti... [4]: https://apnews.com/article/france-election-le-pen-national-r... [5]: https://www.nbcnews.com/world/europe/france-raid-far-right-n... [6]: https://www.nytimes.com/2024/07/02/world/europe/france-natio... [7]: https://www.dw.com/en/france-far-right-rally-after-marine-le...
But there's also an obvious semantic fail when 34% of the electorate is "far right". This means (16% - half the moderate percentage) is on the non-far right. It implies that "far" is just meaningless cant.
https://www.politico.eu/europe-poll-of-polls/france/
https://en.wikipedia.org/wiki/Astroturfing
https://www.bbc.com/news/articles/cj38m11218xo
Control the media, you control the information that a significant part of Europeans get. Elections aren't won by 50%, you only need to convince 4 or 5% of the population that the far right is great.
There is a reason Musk paid so much for Twitter. If this stuff had no effect he wouldn't have bought it.
If people want to post disinformation that's fine, but the way that these companies push that information onto users is the problem. There either needs to be accountability for platforms or a ban on behavior driven content feeds.
People lying on the internet is fine. Social media algorithms amplifying the lie because it has high engagement is destroying our society.
By exposing people to a flood of misinformation and politically radicalizing content designed to maximize engagement via emotion (usually anger).
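To make the mechanism concrete, here's a toy sketch in Python of engagement-driven ranking - the weights and field names are made up for illustration, not any platform's real scoring model:

    from dataclasses import dataclass

    @dataclass
    class Post:
        text: str
        likes: int
        replies: int
        reshares: int

    def engagement_score(p: Post) -> float:
        # Replies and reshares weighted above likes because they generate
        # follow-on impressions; note there is no term for accuracy.
        return p.likes + 3 * p.replies + 5 * p.reshares

    def rank_feed(posts: list[Post]) -> list[Post]:
        # An outrage-bait lie with many angry replies outranks a sober
        # correction with a few likes, purely by construction.
        return sorted(posts, key=engagement_score, reverse=True)

Nothing in that objective distinguishes true from false; anger just happens to score well.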
Remember when Elon Musk alleged that he was going to find a trillion dollars (a year) in waste fraud and abuse with DOGE? Did he ever issue a correction on that statement after catastrophically failing to do so? Do you think that kind of messaging might damage the trust in our institutions?
To be 'fair', finding fraud was never the real purpose of DOGE, just a fake argument that enough citizens would find plausible.
How true is this really?
We certainly have data points to show Musk has put his thumb on the scale
I haven't dug into whatever they open sourced about the algorithm to make definitive statements. Regardless, there are many pieces out there where you can learn about the evidence for direct manipulation.
That's not how science and statistics work. Comprehensive evidence and analysis is a search or chatbot away. The legal cases will go into the details as well, by the nature of how legal proceedings work.
Near right to me is advocating for things like lower taxes or different regulations or a secure border (but without the deportation of millions who are already in the country and abiding by laws). Operating the government for those things while still respecting the law, upholding the constitution, defending civil rights, and avoiding the deeply unethical grifting and corruption the Trump administration has normalized.
Obviously this is very simplified. What are your definitions out of curiosity?
And I would agree with the other reply that Musk is not far right by that definition.
> Avoiding the deeply unethical grifting and corruption the Trump administration has normalized.
Care to give examples of these?
Assume good intent. It helps you see the actually interesting point being made.
My point still stands, "politics change and assessments of politicians change accordingly".
Bill Clinton's crime bill would be considered far right today.
Ronald Reagan's amnesty bill would be considered far left today.
It's not just me saying this. Ask anyone who was politically active (as a leftist) in the 90s. I'm not sure what the equivalent of the Democratic Socialists of America (center-left) was at that time, but I'm sure there was one, and Bill Clinton was much more right-wing. That's without mentioning actual left-wing parties (like communists, anarchists, the Black Panthers, etc).
He raised taxes, lowered military spending, and pursued universal healthcare. Those are not, and have never been, right-wing stances in the US.
For example, universal health-care is only left-wing if it's a public service. Taking money out of the State's pockets to finance private healthcare and pharmaceutical for-profit corporations is very much a definition of right-wing policy.
I don't think many self-described "right-leaning" people would have called Clinton "right wing" in the 90s.
I 100% see your point and agree with you that he had major policies that I would call right wing today.
As much as it pains me to say this, because I myself consider de Gaulle to be a fascist in many regards, that's far from a majority opinion (disclaimer: I'm an anarchist).
I think de Gaulle was a classic right-wing authoritarian ruler. He had to take some social measures (which some may view as left-wing) because the workers at the end of WWII were very organized and had tens of thousands of rifles, so such was the price of social peace.
He was right-wing because he was rather conservative, pro private property/entrepreneurship, and strongly anti-communist. Still, he had strong national planning for the economy, much state support for private industry (Elf, Areva, etc) and strong policing on the streets (see also the Service d'Action Civique for de Gaulle's fascist militias with long ties to historical Nazism and the secret services).
That being said, de Gaulle to my knowledge was not really known for racist fear-mongering or hate speech. The genocides he took part in (eg. against Algerian people) were very quiet and the official story line was that there was no story. That's in comparison with far-right people who already at the time, and still today, build an image of the ENEMY towards whom all hate and violence is necessary. See also Umberto Eco's Ur-fascism for characteristics of fascist regimes.
In that sense, and it really pains me to write this, but de Gaulle was much less far-right than today's Parti Socialiste, pretending to be left wing despite ruling with right-wing anti-social measures and inciting hatred towards french muslims and binationals.
I think that, for most Western people today, far-right == bad to non-white people, independent of intention (as you demonstrated with your remark about the PS), so de Gaulle's approach to Algeria, whether he's loud about it or not, would qualify him as far-right already.
All this to say, the debate is based on differing definitions of far-right (for example you conflate fascism and far-right and use Eco, while GP and I seem to think it's about extremely authoritarian + capitalist), and has started from an ignorant comment by an idiot who considers Bush (someone who is responsible for the death of around a million Iraqis, the creation of actual torture camps, large-scale surveillance, etc.) not far-right because, I assume, he didn't say anything mean about African-Americans.
But then again, people on this very forum will argue Sanders is a literal communist, so we circle back to the sub-70-IQ problem.
The "free-speechism" of the past you mention was about speaking truth to power, and this movement still exists on the left today, see for example support for Julian Assange, arrested journalists in France or Turkey, or outright murdered in Palestine.
When Elon Musk took over Twitter and promised free speech, he very soon actually banned accounts he disagreed with, especially leftists. Why free speech may be more and more perceived as right wing is because despite having outright criminal speech with criminal consequences (such as inciting violence against harmless individuals such as Mark Bray), billionaires have weaponized propaganda on a scale never seen before with their ownership of all the major media outlets and social media platforms, arguing it's a matter of free speech.
I think we can and should all agree that child sexual abuse is a much larger and more serious problem than political leanings.
It's ironic as you're commenting about a social media platform, but I think it's frightening what social media has done to us with misinformation, vilification, and echo chambers, to think political leanings are worse than murder, rape, or child sexual abuse.
It's pretty obvious the French are deliberately conflating the two to justify attacking a political dissident.
Used to? Still does. A convincing fake is still only a fake.
> It's pretty obvious the French are deliberately conflating the two to justify attacking a political dissident.
Agreed. But the same conflation in the comments hereabouts is ... puzzling.
I mean, abuse of a photo == abuse of a child? Like, voodoo dolls? Creepy.
However, it has one big problem that is rarely discussed: normalization of behaviour, interests, and attitudes. It just becomes a thing that Grok can do - for paid accounts - and people think "ok, no harm, no problem"... Long-term, there will be harm. This has been demonstrated over decades of investigation of CSAM.
That's why all media depicting violence should be banned.
/s
Good luck with that...
and the thing about negligence which causes harm to humans (instead of e.g. just financial harm) is that:
a) you can't opt out of responsibility; it doesn't matter what you put into your TOS or other contracts
b) executives who are found responsible for the negligent actions of a company can be held _personally_ liable
And independent of what X actually did, Musk as its highest-level executive personally:
1) frequently made statements that imply gross negligence (to be clear, that isn't necessarily how X acted, which is the actually relevant part)
2) claimed that all major engineering decisions etc. are his and no one else's (because he loves bragging about how good of an engineer he is)
This means summoning him for questioning is, legally speaking, a must-have, independent of whether you expect him to show up or not. And he probably should take it seriously, even if that just means he could send a different high-level executive from X instead.
(it’ll be interesting to see if this discussion is allowed on HN. Almost every other discussion on this topic has been flagged…)
As mentioned in the article, the UK's ICO and the EC are also investigating.
France is notably keen on raids for this sort of thing, and a lot of things that would be basically a desk investigation in other countries result in a raid in France.
/i
When notified, he immediately:
Have the other AI companies followed suit? They were also allowing users to undress real people, but it seems the media is ignoring that and focussing their ire only on Musk's companies... https://www.bbc.com/news/articles/c98p1r4e6m8o
> Have the other AI companies followed suit? They were also allowing users to undress real people
No they weren’t? There were numerous examples of people feeding the same prompts to different AIs and having their requests refused. Not to mention, X was also publicly distributing that material, something other AI companies were not doing. Which is an entirely different legal liability.
In UK, it is entirely the same. Near zero.
Making/distributing a photo of a non-consenting bikini-wearer is no more illegal when originated by computer in bedroom than done by camera on public beach.
“Sorry I broke the law. Oops for reals tho.”
"Study uncovers presence of CSAM in popular AI training dataset"
https://www.theregister.com/2023/12/20/csam_laion_dataset/.
The onus is on the contractor to make sure any classified information is kept securely. If by raiding an office in France a bunch of US military secrets are found, it would suggest the company is not fit to have those kind of contracts.
https://www.the-independent.com/news/world/americas/crime/us...
I barely use it these days and think adding it to twitter is pretty meh but I view this as regulators exploiting an open goal to attack the infrastructure itself rather than grok e.g. prune-juice drinking sandal wearers in britain (many of whom are now government backbenchers) absolutely despise twitter and want to ban it ever since their team lost control. Similar vibe across the rest of europe.
They have (astutely, if they realise it at least) found one of the last vaguely open/mainstream spaces for dissenting thought and are thus almost definitely plotting to shut it down. Reddit is completely captured. The right is surging dialectically at the moment but it is genuinely reliant on twitter. The centre-left is basically dead so it doesn't get the same value from bluesky / their parts of twitter.
I don't love heavy-handed enforcement on speech issues, but I do really like a heterogeneous cultural situation, so I think it's interesting and probably to the overall good to have a country pushing on these matters very hard, just as a matter of keeping a diverse set of global standards, something that adds cultural resilience for humanity.
linkedin is not a replacement for twitter, though. I'm curious if they'll come back post-settlement.
CSAM is banned speech.
Libel must be an assertion that is not true. Photoshopping or AI-ing someone isn't an assertion of something untrue. It's more the equivalent of saying "What if this were true?", which is perfectly legal.
In the US it varies by state but generally requires:
- A false statement of fact (not opinion, hyperbole, or pure insinuation without a provably false factual core).
- Publication to a third party.
- Fault.
- Harm to reputation.
----
In the US it is required that it is written (or in a fixed form). If it's not written (fixed), it's slander, not libel.
Quite.
> That's why some countries, notably Japan, allow the production of hand-drawn material that in the US would be considered CSAM.
Really? By what US definition of CSAM?
https://rainn.org/get-the-facts-about-csam-child-sexual-abus...
"Child sexual abuse material (CSAM) is not “child pornography.” It’s evidence of child sexual abuse—and it’s a crime to create, distribute, or possess. "
Durov was held on suspicion Telegram was willingly failing to moderate its platform and allowed drug trafficking and other illegal activities to take place.
X has allegedly illegally sent data to the US in violation of GDPR and contributed to child porn distribution.
Note that both are directly related to violations of data safety law or association with separate criminal activities; neither is about speech.
CSAM was the lead in the 2024 news headlines in the French prosecution of Telegram also. I didn't follow the case enough to know where they went, or what the judge thought was credible.
From a US mindset, I'd say that generation of communication, including images, would fall under speech. But then we classify it very broadly here. Arranging drug deals on a messaging app definitely falls under the concept of speech in the US as well. Heck, I've been told by FBI agents that they believe assassination markets are legal in the US - protected speech.
Obviously, assassinations themselves, not so much.
I don't believe you. Not sure what you mean by "assassination markets" exactly, but "Solicitation to commit a crime of violence" and "Conspiracy to murder" are definitely crimes.
One of my portfolio companies had information about contributors to these markets — I was told by my FBI contact when I got in touch that their view was the creation of the market, the funding of the market and the descriptions were all legal — they declined to follow up.
Durov wasn't arrested because of things he said or things that were said on his platform, he was arrested because he refused to cooperate in criminal investigations while he allegedly knew they were happening on a platform he manages.
If you own a bar, you know people are dealing drugs in the backroom and you refuse to assist the police, you are guilty of aiding and abetting. Well, it's the same for Durov except he apparently also helped them process the money.
Censorship increases homogeneity, because it reduces the amount of ideas and opinions that are allowed to be expressed. The only resilience that comes from restricting people's speech is resilience of the people in power.
Humanity itself is trending more toward monoculture socially; I like a lot of things (and hate some) about the cultural trend. But what I like isn't very important, because I might be totally wrong in my likes; if only my likes dominated, the world would be a much less resilient place -- vulnerable to the weaknesses of whatever it is I like.
So, again, I propose for the race as a whole, broad cultural diversity is really critical, and worth protecting. Even if we really hate some of the forms it takes.
There's someone who was being held responsible for what was in encrypted chats.
Then there's someone who published depictions of sexual abuse and minors.
Worlds apart.
They can read all messages, so they don't have an excuse for not helping in a criminal case. Their platform had a reputation of being safe for crime, which is because they just... ignored the police. Until they got arrested for that. They still turn a blind eye but not to the police.
Anyway, cut to the chase: I just checked out Matthew Green's post on the subject. He is on my default "trust what he says about cryptography" list, along with some others like djb, Nadia Heninger, etc.
Embarrassed to say I did not realise - I should have known! 10+ years ago I used to lurk the IRC dev chans of every relevant cypherpunk project, including TextSecure and otr-chat; I saw Signal being made, and before that witnessed chats with devs and Ian Goldberg and such. I just assumed Telegram was multiparty OTR.
OOPS!
Long-winded post because that is embarrassing (as someone who studied cryptography in a 2009 mathematics undergrad, did a postgrad wargames and computer security course in 2010, and worse - someone whose word around 2012-2013 was taken on these matters by activists, journalists, and researchers with pretty gnarly threat models, for instance some Guardian stories and a former researcher into torture. I'm also the person who wrote the bits of 'how to hold a crypto party' that made it a protocol without an organisation and made clear the threat model was that anyone could be there). Oops, oops, oops.
Yes, thanks for letting me know. I hang my head in shame for missing that one, or somehow believing it without much investigation. Thankfully it was just my own personal use, to contact e.g. a friend in the States who isn't already on Signal.
EVERYONE: DON'T TRUST TELEGRAM AS END TO END ENCRYPTED CHAT https://blog.cryptographyengineering.com/2024/08/25/telegram...
Anyway, as they say, "use it or lose it" - yeah, my assumptions here are no longer valid, and I'm not considered to have an educated opinion if I got something that basic wrong.
“Probably just Talulah and me. What day/night will be the wildest party on your island?” Musk replied, in an apparent reference to his former wife Talulah Riley.
https://www.theguardian.com/technology/2026/jan/30/elon-musk...
I think there's just as much evidence Clinton did as Musk. Gates on the other hand.
Has the latest release changed that narrative?
Elon didn't ask to go, he was invited multiple times
Why isn't that a major red flag exactly?