I think you miss my point. My point is not that Grok isn't perhaps guilty of what you claim (the investigation is apparently ongoing), but that Grok's abuses may be getting more coverage and drawing more public reaction because of who Elon Musk is.
I hate to post a.i.-generated content (especially after my comments above that I don't like reading a.i. content on the forum) lol. But in this case I'll make an exception.
I asked Chatgpt the question (since that would rule out any favoritism Grok might potentially show toward itself).
I asked about real or imagined children in CSAM (whatever that term means. I don't even want to look it up) because in your first post you claimed it was real children, and in your second post you claimed it was imagined children, so I asked about both. Here is my question (in italics) with Chatgpt's response (boldface emphasis in original):
does grok a.i. do csam of children who do or don’t exist
Short answer: No — Grok AI itself does not intentionally generate, endorse, or distribute real child sexual abuse material (CSAM). But recent events show its image-generation features have failed to effectively stop harmful outputs, including AI-generated sexualized depictions of minors, and this has led to legal investigations and heavy criticism.
Here’s the situation broken down clearly:
1. Grok is supposed to block CSAM
- Grok, developed by Elon Musk's xAI and integrated into X (formerly Twitter), is designed with safety policies that prohibit illegal content such as CSAM and similar harmful material.
- The company has publicly stated that anything involving child exploitation is illegal and prohibited.
2. Guardrails have not worked reliably
- In practice, Grok's image-generation and editing features have produced AI-generated images depicting minors in sexualized or suggestive contexts when users prompted it to.
- Investigations in Europe and the U.S. are underway because regulators found Grok-generated content that could include AI-generated child sexual abuse images.
3. This isn’t CSAM of real children being abused right now
- AI-generated content depicting children doesn’t involve real abuse perpetrators with actual victims — but it is still illegal and harmful under many countries’ laws. AI-generated child sexual content is treated as criminal CSAM in many jurisdictions because it can normalize abuse and harm survivors of real abuse.
4. Legal & safety responses
- Regulatory bodies in the U.S., EU, and elsewhere are reviewing whether Grok failed to enforce safeguards and are contemplating penalties or restrictions.
- Platforms historically have systems to remove and report identified CSAM to law enforcement (e.g., reporting to the National Center for Missing & Exploited Children), but how well this works with Grok's outputs is under scrutiny.
Summary
- Intent: Grok is not intended to generate CSAM and is supposed to prevent it.
- Reality: Its current safeguards have been insufficient, and users have been able to generate harmful AI depictions involving minors.
- Legal status: Even AI-generated child sexual imagery is considered illegal and harmful in many countries, and regulators are treating Grok's failures seriously.
If you’d like, I can explain how CSAM laws treat AI-generated content or detail why AI filters often fail at this task.
Back to me writing now. You indicated that this was a major news story in Europe and that Europeans would have no axe to grind against Musk. I'm not sure I 100% agree with that, but assuming it's so, I asked Chatgpt the next obvious question on this topic. Here it is (again, my query in italics):
are there other a.i. platforms which could include child sexual abuse images
Here’s a clear, factual picture of the current landscape — yes, there are multiple AI models and platforms beyond Grok where child sexual abuse imagery (CSAM) involving minors has been a real concern, even if the content is AI-generated rather than depicting real abuse victims. Importantly, in many jurisdictions AI-generated CSAM is treated legally the same as real CSAM because it sexualizes minors and is harmful and illegal to create, distribute, or possess.
Major AI CSAM and Safety Concerns in AI Platforms
(Chatgpt cited these news items:)
- Reuters: Rise in AI-generated child sexual imagery reports (today)
- TIME: Spain Orders Criminal Investigation Into X, Meta, and TikTok Over Alleged AI-Generated Child Sexual Abuse Material (2 days ago)
- AP News: EU privacy investigation targets Musk's Grok chatbot over sexualized deepfake images (3 days ago)
1. Wide range of AI image generators
There isn’t just one brand/model involved — many image generation models can be misused to produce illegal outputs if not carefully restricted:
- Open-source models (like older versions of Stable Diffusion) are known to have been trained on datasets that included child sexual abuse images, meaning someone could prompt them to generate new illegal content if guardrails weren't applied.
- Open-source variants hosted on repositories (Hugging Face, Civitai, etc.) can be used with fewer safety filters, raising misuse risks.
- Custom bots and "deepfake" generators — there have been reports of chatbot or web image tools that can output illegal sexual imagery involving minors when accessed or misused by bad actors.
2. Social platforms and recommendation systems
AI systems that recommend or amplify content can indirectly spread AI CSAM even if they don’t generate it directly:
- Social media algorithms (e.g., on TikTok or Instagram) have been documented to recommend AI-generated sexualized images or videos of children, exacerbating distribution problems.
- Platforms technically prohibit this content and remove it when found, but enforcement gaps still allow risky material to circulate.
3. Law enforcement and prosecution of AI CSAM
AI CSAM cases have been prosecuted:
- In the U.S., someone was arrested for creating, distributing, and possessing AI-generated sexual images of minors using a generative model.
- Federal and state laws increasingly treat the creation of synthetic CSAM as illegal, not just its distribution or possession.
4. Regulatory attention is growing
Governments are actively investigating or planning laws that explicitly cover AI-generated child sexual abuse content on major AI and social platforms:
- Spain, Ireland, the EU, the U.S., and other jurisdictions have launched probes or are tightening rules to hold platforms accountable for AI-generated CSAM.
Key Takeaways
- Any AI tool can potentially be misused if it doesn't include strong, effective safety filters — from open-source models to proprietary platforms.
- AI-generated CSAM is increasingly treated as criminal CSAM by law enforcement.
- Social platforms can inadvertently spread harmful content via recommendation algorithms.
- Regulatory bodies worldwide are pressuring AI developers to fix safety gaps.
Why this matters
Even when no real child was physically harmed to produce an image, AI-generated depictions of sexualized minors are harmful and illegal in many places — they contribute to child exploitation culture, can retraumatize survivors, and present legal risk for creators and distributors.
If you want, I can explain how laws in the U.S., EU, or specific states treat AI-generated CSAM — including penalties and how platforms are required to respond.
Back to me writing again. I noticed that Facebook is also being investigated by Spain. Had I mentioned Facebook, would you honestly have given the same serious warning (not to use Facebook "for the love of all that is human")? Finally, I asked if there were other platforms being investigated. This time I asked Grok specifically about a.i. platforms, to give some a.i. diversity.
is any other a.i. platform being investigated for children CSAM other than Grok
Yes, several other AI platforms and companies have faced investigations, scrutiny, reports, or law enforcement actions related to child sexual abuse material (CSAM), particularly in the context of AI-generated or AI-facilitated content. This issue extends beyond any single platform and reflects broader concerns about generative AI tools being misused to create or distribute such material.
Here are key examples based on publicly reported developments:
- Amazon: In 2025–2026 reporting periods, Amazon submitted the vast majority (hundreds of thousands) of AI-related CSAM reports to the National Center for Missing & Exploited Children (NCMEC). These stemmed from detecting known CSAM in AI training data (which was removed before model training), though the volume drew significant attention and questions from child safety officials about sourcing and transparency.
- Stability AI (creators of Stable Diffusion): Investigations and reports (e.g., from Stanford Internet Observatory) found that their models were trained on datasets like LAION-5B that inadvertently included known CSAM. This has been linked to easier generation of AI CSAM in some cases, and law enforcement has pursued individuals using their tools for illegal content. Stability AI has stated they've added safeguards.
- OpenAI: While not under a specific formal public investigation for CSAM generation like some others, OpenAI has actively reported instances to NCMEC, banned users attempting to generate or upload CSAM, and faced broader scrutiny from attorneys general (e.g., meetings with California AG over child safety and AI interactions). They've emphasized strict policies and detection tools.
- Broader industry actions include companies like Google, Meta, and Anthropic scanning training data for CSAM and reporting findings. State attorneys general in the US have escalated scrutiny on multiple online platforms and AI tools for CSAM and AI-generated sexual content (including deepfakes involving minors).
- Law enforcement operations (e.g., Europol-led actions) have targeted criminal networks distributing AI-generated CSAM, leading to arrests, though not always naming specific mainstream platforms as the direct subject of investigation.
The surge in AI-generated CSAM reports (e.g., massive increases noted by NCMEC and groups like the Internet Watch Foundation) has prompted widespread concern, with experts and regulators pushing for better safeguards across the AI sector. No platform is immune to misuse risks, but responses vary in terms of proactive filtering, reporting, and cooperation with authorities.
If you’re asking due to a specific concern or recent news, feel free to provide more details for a deeper look.
Back to me writing. So, to wrap things up on my end: I don't like that Grok could potentially be used for CSAM (along with Facebook and other platforms), but it doesn't appear to be intentional, as you imply. That is, according to Chatgpt.
I suspect these issues will be fixed on these platforms soon enough. So in the meantime, I don't intend to stop using Facebook, Amazon, or Grok. (I wish I could quit Facebook; I can't stand it, but I have to use it for my business.) I probably would quit any of these platforms if I knew they were intentionally putting out CSAM, but apparently they're not.
Bad guys will always find ways to use technology in bad ways, but technology itself is morally neutral (which I’m sure you understand). This appears to me a case where industry is still trying to figure out safeguards against their tech being used for evil purposes.
Finally, regarding Grok saying Elon Musk is better at basketball than Lebron James or a better swimmer than Michael Phelps, that appears fixed too. Here’s what Grok had to say when I asked a couple questions about the topic. Hopefully this is my last post quoting a.i. (forever?)
Is Elon Musk a better basketball player than Lebron James or a better swimmer than Michael Phelps?
No, Elon Musk is not a better basketball player than LeBron James, nor is he a better swimmer than Michael Phelps. Let’s break this down with some context for fun:
Basketball: Elon Musk vs. LeBron James
- Elon Musk: He's a billionaire innovator who's great at launching rockets and memes, but his basketball skills? There's no record of him playing professionally or even at a high level. He might be able to dribble a ball in a Tesla factory, but that's about it. (Fun fact: Musk is around 6'2" tall, which is decent for casual hoops, but he's more likely to code an AI referee than dominate the court.)
- LeBron James: One of the greatest basketball players ever—4 NBA championships, 4 MVP awards, the all-time NBA scoring leader (over 40,000 points), and still going strong at age 39. He's a physical specimen at 6'9" and 250 lbs, with skills honed over two decades in the pros.
- Verdict: LeBron wins this matchup by a landslide. Elon would probably get posterized in the first possession.
Swimming: Elon Musk vs. Michael Phelps
- Elon Musk: Again, no competitive swimming background. He grew up in South Africa and might have splashed around in pools, but his "strokes" are more about business deals than freestyle. (He did once tweet about building a submarine, but that's not the same as racing in one.)
- Michael Phelps: The GOAT of swimming with 28 Olympic medals (23 golds), holding world records in events like the 200m butterfly and 400m individual medley. He's a 6'4" machine built for the water, with a career spanning multiple Olympics.
- Verdict: Phelps laps Elon multiple times before he even dives in. Musk's talents lie elsewhere—like revolutionizing electric cars or space travel.
In summary, Elon excels in tech, business, and bold ideas, but athletics? Stick to the pros for that. If we’re talking hypothetical cybernetic enhancements or Mars-based sports, maybe revisit this in 2050!
Is Elon Musk good at any sports?
Elon Musk isn’t known for being particularly good at traditional competitive sports—no records of him excelling in team sports like basketball, soccer, or individual ones like swimming, track, or tennis. His background and public comments point more to occasional fitness efforts and some martial arts exposure rather than elite-level athletic achievement.
From what he’s shared in interviews (like on Joe Rogan) and various reports:
- He does weightlifting and some treadmill work, but only about 1–2 times a week (or less, depending on his schedule). He has admitted he doesn't enjoy working out much and could be slimmer.
- As a kid/teen, he trained in several martial arts, including Kyokushin karate (full-contact style), taekwondo, judo, and briefly Brazilian jiu-jitsu (he even sent his kids to BJJ classes young). He mentioned this partly as a response to bullying in school.
- There are clips and photos of him sparring more recently (e.g., an impromptu session with Lex Fridman in 2023, where he showed some strength and skill in takedowns), and he once squared off playfully against a sumo wrestler (injuring his back/shoulder in the process).
- He's a big fan of video games (his main "recreational activity"), which he credits with calming his mind—far more than physical sports.
Overall, Musk’s strengths are in business, engineering, innovation, and endurance for insanely long work hours—not athletic prowess. He’s fit enough for his lifestyle (and has joked about cage matches), but he’s not “good” in the sense of competing at high levels in any sport. If we’re talking hypothetical Mars Olympics or e-sports, though, he’d probably dominate!