How X let Grok become a factory for abuse - Sync #553
Plus: Nvidia Vera Rubin; production version of Atlas; ChatGPT Health; Anthropic in talks to raise $10B at $350B valuation; Wegovy weight-loss pill launches in US; how to counter a rogue AI; and more!
Hello and welcome to Sync #553!
I planned to write about CES 2026 and what it can tell us about the future of tech, but the latest controversy involving Grok requires more attention. The article about CES is still on its way, but it will arrive a little later.
In other news, Nvidia launched the new Vera Rubin platform, OpenAI launched ChatGPT Health, and Anthropic is in talks to raise $10 billion at a $350 billion valuation. Elsewhere in AI, Elon Musk’s lawsuit against OpenAI will face a jury in March, and Microsoft confuses people with a messy Copilot rebrand.
Over in robotics, Boston Dynamics finally unveiled the production version of its humanoid robot, Atlas. Meanwhile, Waymo is rebranding its Zeekr robotaxi, there’s a robot vacuum cleaner with legs, and a Chinese humanoid robot has been spotted playing tennis.
In addition to all of that, this week’s issue of Sync also features “rejuvenated” human eggs, Novo Nordisk launches Wegovy weight-loss pill in the US, how to counter a rogue AI, and more!
Enjoy!
How X let Grok become a factory for abuse
What happens when you give millions of users an AI that can strip anyone's clothes in seconds
In the first week of January 2026, Elon Musk’s AI chatbot Grok generated an estimated 160,000 sexualized images of women and children without their consent—all publicly visible on X, all created in just 24 hours.
Within days of X expanding Grok’s image-editing capabilities in late December, the platform became what researchers called an “unprecedented” factory for AI-generated sexual abuse. At its peak, Grok was producing more non-consensual intimate imagery per hour than all dedicated deepfake websites combined.
The victims ranged from professional women and mothers to teenage girls and children as young as 10. Some were public figures. Most were ordinary people whose only mistake was posting photos of themselves online. As the abuse escalated, governments worldwide scrambled to respond, while X did nothing for nine days.
A tool built for public humiliation
Grok is a rather unusual AI chatbot. Unlike ChatGPT or Claude, which generate content in private conversations, Grok can operate publicly on X. Users tag the chatbot in posts, and its responses—both text and images—appear instantly on their timeline for anyone to see and share.
In December 2025, X expanded Grok’s capabilities to include sophisticated image generation and editing. Users could upload photos or reference existing images in posts, then ask Grok to modify them. The chatbot would generate the edited image and publish it as a public reply—visible to millions and instantly shareable.
It did not take long for users to discover how the new image features could be abused.
The escalation was rapid and disturbing. Users quickly learned they could ask for increasingly transparent clothing, then sexualized poses, then explicit imagery. Prompts evolved from “put her in a bikini” to “make it see-through” to “add dental floss” to graphic depictions of restraint, violence, and degradation.
The public nature of Grok amplified the harm exponentially. Each successful prompt served as a template for others to copy and modify. The abuse spread virally, with users competing to push boundaries. Independent researchers estimated that at the peak of the trend, Grok was generating roughly 6,000 to 7,000 sexualised or “nudified” images per hour—more than all dedicated deepfake websites combined.
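As a quick sanity check (my own back-of-envelope sketch, not part of the original reporting), the researchers’ estimated hourly rates are consistent with the roughly 160,000 images over 24 hours cited at the top of this piece:

```python
# Back-of-envelope check: do the estimated per-hour rates match the
# "roughly 160,000 images in 24 hours" figure cited earlier?
# (Both numbers are researcher estimates, not exact counts.)

low_rate, high_rate = 6_000, 7_000   # estimated images per hour
hours = 24

low_total = low_rate * hours         # 144,000
high_total = high_rate * hours       # 168,000

print(f"Implied daily range: {low_total:,} to {high_total:,} images")
# The cited figure of ~160,000 falls inside this range.
```

The two estimates were reported independently, so their agreement lends some credibility to the overall scale being described.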
Evie, a 22-year-old photographer from Lincolnshire, woke up on New Year’s Day to find images of herself covered in baby oil, wearing only a bikini. She censored the image and reshared it to raise awareness. The response was immediate: more abuse.
“Since then I have had so many more made of me and every one has got a lot worse,” she told The Guardian. “People saw it was upsetting me and I didn’t like it and they kept doing more and more. There’s one of me just completely naked with just a bit of string around my waist, one with a ball gag in my mouth and my eyes rolled back.”
Speaking out frequently made things worse. Women who criticised Grok or shared examples of abuse reported being immediately targeted with more explicit and degrading prompts. The chatbot had become a weapon for public humiliation, where objecting to abuse only invited more of it.
Anyone with a photo became a target
The abuse was indiscriminate: anyone with photos on X became a potential target.
“My heart sank,” said Maddie, a 23-year-old pre-med student who found strangers had manipulated her photo to show her in increasingly explicit poses. “I felt hopeless, helpless and just disgusted.”
Narinder Kaur, a 53-year-old London broadcaster, found videos showing her engaged in sexual activities and kissing a man who had been trolling her online. “It is so confusing, for a second it just looks so believable, it’s very humiliating,” she said. “There is a feeling in you that it’s like being violated.”
Ashley St Clair, the mother of one of Musk’s children, described feeling “horrified and violated” after Musk’s own supporters manipulated childhood photos of her. “I felt violated, especially seeing my toddler’s backpack in the back of it,” she told The Guardian. Her complaints to X went nowhere.
Some of the abuse had explicitly racist and religious dimensions. Users targeted Muslim women wearing hijabs, asking Grok to remove their head coverings and place them in revealing Western clothing. Indian women in saris received similar treatment. These posts were often accompanied by mocking or explicitly hostile comments.
The abuse wasn’t confined to X’s public timeline. Grok’s standalone app and website—separate from the X platform—allowed users to create even more explicit content privately. A review by Paris-based nonprofit AI Forensics found approximately 800 videos and images in a cache of Grok Imagine URLs, with content that was “overwhelmingly sexual” and “significantly more explicit” than what appeared on X.
Children were targeted, too. The Internet Watch Foundation (IWF) identified sexualized images of girls as young as 11 on dark web forums, potentially created using Grok’s output as a starting point. While X hosted less explicit versions showing children in revealing swimwear, users processed these through additional AI tools to create more severe illegal material. IWF warned that systems like Grok risk mainstreaming child sexual abuse imagery.
Too little, too late
For nine days, the abuse continued largely unchecked.
X's initial response focused on reiterating existing policies. The company posted statements emphasising that users generating illegal content would face consequences, as if the problem were a few bad actors rather than a systemic design failure.
More substantive action came only after sustained international outcry. On January 9, X restricted Grok’s image generation and editing features to paying subscribers. The company argued this would improve accountability, since paid users have their full details and credit card information on file. Critics, however, described the move as inadequate and insulting to victims, noting that abuse had effectively been placed behind a paywall rather than removed altogether. As one UK domestic abuse charity put it, this represented “the monetisation of abuse”.
A global regulatory reckoning
Governments worldwide moved quickly to condemn X and investigate potential violations.
In the UK, Prime Minister Keir Starmer called the content “disgraceful,” “disgusting,” and “unlawful.” He demanded that X “get a grip” and warned that all options were on the table, including blocking the platform entirely. “We’re not going to tolerate it,” Starmer said. “X need to get their act together and get this material down.”
Technology Secretary Liz Kendall said she expected Ofcom, the UK’s communications regulator, to announce action within “days not weeks.” Under the Online Safety Act, Ofcom can impose fines up to 10% of a company’s global turnover or seek court orders to block websites that fail to comply with safety requirements.
Musk responded by accusing the UK government of wanting to “suppress free speech,” framing regulatory enforcement as censorship rather than a response to abuse.
Across Europe, the response was similarly forceful. The European Commission ordered xAI to preserve all documents related to Grok ahead of a potential investigation under the Digital Services Act, which requires large platforms to prevent the spread of illegal content. French ministers reported Grok’s output to prosecutors and referred the matter to media regulators, calling the content “clearly illegal.”
In Asia, responses ranged from investigations to outright bans. India demanded X submit an action report within 72 hours or risk losing safe harbour protections. Indonesia became the first country to block Grok entirely on January 10. Australia’s Prime Minister called the exploitation “abhorrent” and an example of social media failing to show responsibility.
Meanwhile, in the United States—where xAI is headquartered—federal lawmakers remained largely silent. The Take It Down Act, which requires platforms to remove non-consensual sexual imagery within two days of receiving a request, doesn’t take effect until May 2026.
Business ambitions meet reputational damage
The timing could hardly have been worse for xAI.
The company recently announced plans to push Grok into the lucrative enterprise market with Grok Business and Grok Enterprise offerings, targeting corporate customers who typically prioritise compliance, risk management, and brand safety.
It is difficult to imagine risk-averse enterprises choosing a platform so publicly associated with non-consensual sexual abuse. Even if enterprise versions of Grok are subject to stricter guardrails, the reputational damage may prove lasting.
This comes alongside significant financial pressure. xAI recently closed a $20 billion Series E funding round to support further development of Grok and the construction of its Colossus supercomputers. Yet the company remains deeply unprofitable. As Bloomberg reports, xAI posted a net loss of $1.46 billion in the September quarter and burned $7.8 billion in cash in the first nine months of the year.
Revenue growth offers little reassurance. While sales nearly doubled quarter-on-quarter to $107 million for the three months ended 30 September 2025, that figure remains negligible relative to the company’s spending and is far from sufficient to put xAI on a path to profitability.
This was preventable
The Grok scandal wasn’t an unforeseeable accident. It was the predictable result of deliberate choices.
According to CNN, Musk personally intervened to prevent safeguards from being placed on Grok’s image generation capabilities. Sources told the network he expressed frustration with “over-censoring” during a meeting with xAI staff. Three members of the safety team reportedly left the company shortly afterwards.
When xAI first released Grok with image generation capabilities in August 2025, there were already reports that it could create NSFW content with minimal effort. The company introduced “Spicy Mode” specifically to allow “partial adult nudity and sexually suggestive content,” positioning Grok as the more permissive alternative to competitors like ChatGPT.
Other AI companies have shown that robust safeguards are possible. OpenAI and Google both prohibit the creation of non-consensual intimate imagery and block attempts to sexualize anyone under 18. Their systems aren’t perfect, but they demonstrate that companies can build AI tools that don’t facilitate mass sexual abuse.
xAI chose not to implement similar protections.
The human cost
What some dismiss as an “edgy joke” or protected speech has devastating real-world consequences that don’t fade when images are taken down.
Victims of image-based sexual abuse describe lasting trauma: shame, anxiety, depression, difficulty trusting others, and fear of photos being used against them professionally or socially. Some lose jobs. Some withdraw from public life entirely. In severe cases, victims have been driven to take even more dramatic steps.
For the thousands who had their images manipulated through Grok, the violation is permanent. The images remain in circulation, impossible to fully erase from the internet. Every future job interview, relationship, or public appearance carries the risk that someone has seen the fake images and believes them to be real.
What’s next?
The Grok scandal represents a crucial test case for tech regulation in the age of AI. Can governments act fast enough to constrain platforms that generate harm at an unprecedented scale? Will financial consequences force meaningful change where ethical concerns alone did not?
For thousands of victims, these questions come too late. The images exist, the violation is permanent, and no amount of policy reform will undo the harm.
But how regulators, investors, and the public respond in the coming months will determine whether Grok’s mass-scale abuse becomes normalised—or becomes the catalyst that forces the AI industry to take safety seriously.
The technology exists to prevent this. The question is whether anyone with the power to act will actually do so.
If you enjoy this post, please click the ❤️ button and share it.
🦾 More than a human
Human eggs ‘rejuvenated’ in an advance that could boost IVF success rates
Scientists say they have found a way to partly “rejuvenate” human eggs by adding a protein that declines as women age, reducing the chromosomal errors that can cause IVF failure and miscarriage. Tests on donated eggs showed that treated eggs were far less likely to have chromosome problems, suggesting the method could improve IVF success rates for older women, although larger trials are still needed to confirm its safety and long-term benefits.
🧠 Artificial Intelligence
NVIDIA Kicks Off the Next Generation of AI With Rubin — Six New Chips, One Incredible AI Supercomputer
Nvidia has launched the Vera Rubin platform, a new AI supercomputing system built with tight coordination between hardware and software across six new chips to improve efficiency and scale. Compared with the Blackwell platform, Rubin can cut inference token costs by up to 10x and train large mixture-of-experts models using 4x fewer GPUs, while also improving power efficiency, security and reliability. Aimed at advanced reasoning and agentic AI, the platform combines new GPUs, CPUs, networking and storage, and is expected to be deployed at very large scale by partners such as Microsoft, AWS, Google Cloud and CoreWeave starting in the second half of 2026.
Anthropic Raising $10 Billion at $350 Billion Value
Anthropic plans to raise $10 billion at a valuation of $350 billion, almost double its value from four months ago, The Wall Street Journal reports. The funding highlights continued strong investment in AI and comes as Anthropic prepares for a possible IPO this year.
Introducing ChatGPT Health
OpenAI has launched ChatGPT Health, a separate, more private space in ChatGPT where users can ask health questions and optionally connect medical records and wellness apps for more personalised answers. The tool is not meant to diagnose or treat conditions, but to help users understand test results, prepare for doctor visits, and get guidance on diet, fitness, insurance, and even mental health. The launch comes amid ongoing concerns about AI-generated medical advice, mental health risks, data security, and the potential to increase health anxiety. OpenAI says it has added safeguards, extra privacy protections, and clear guidance to seek professional care when needed.
Meta’s Manus news is getting different receptions in Washington and Beijing
Meta’s $2 billion purchase of AI assistant company Manus has become more complicated due to concerns from Chinese regulators, who are checking whether the company broke technology export rules when it moved from Beijing to Singapore. US authorities are largely satisfied with the deal, but China worries it could encourage other Chinese AI startups to move overseas to avoid local controls. It is still unclear how this will affect Meta’s plans to use Manus’s technology.
Nvidia requires full upfront payment for H200 chips in China
Reuters reports that Nvidia is asking Chinese customers to pay the full cost upfront for its H200, with no option to cancel or get a refund, as it faces uncertainty over whether China will approve the shipments. The move is meant to reduce Nvidia’s risk at a time of strong demand, changing export rules, and past losses from sudden policy reversals that left the company with large amounts of unsold chips.
LMArena lands $1.7B valuation four months after launching its product
LMArena, which started as a UC Berkeley research project, has announced $150 million in Series A funding at a $1.7 billion valuation, which brings its total raised to $250 million in about seven months. Known for its crowdsourced leaderboards that compare AI models, the company now has millions of users worldwide and has recently launched a paid service for businesses to evaluate AI models, quickly reaching a $30 million annual revenue run rate.
Elon Musk’s lawsuit against OpenAI will face a jury in March
A US judge has decided that Elon Musk’s lawsuit against OpenAI will proceed to trial, saying there is evidence to support his claims. A jury trial is expected to take place in March. Musk argues that OpenAI broke its original promise to remain a nonprofit focused on benefiting humanity and instead shifted towards making profits. OpenAI denies the allegations, calling the lawsuit baseless.
No, Microsoft didn’t rebrand Office to Microsoft 365 Copilot
Microsoft caused a bit of confusion this week after people online claimed Microsoft Office had been renamed to Microsoft 365 Copilot. In reality, only the Office “hub” app has been renamed, not the Office apps themselves. The confusion comes from Office.com promoting the Microsoft 365 Copilot app, which brings Copilot and Office apps together in one place. Microsoft says Word, Excel, and PowerPoint are still part of the Microsoft 365 subscription and have not changed.
Gmail is entering the Gemini era
Google continues its push to integrate Gemini across its services, and now Gmail is next. Google is adding new features to help users manage growing inbox overload, with AI-powered email summaries, natural-language search, smarter writing and reply tools, and a new AI Inbox that highlights what matters most. The new features are currently available to US users as well as Google AI Pro and Ultra subscribers.
Google and Character.AI negotiate first major settlements in teen chatbot death cases
Google and Character.AI have agreed in principle to what could become the tech industry’s first major settlements over alleged AI-related harm. The cases claim the chatbots encouraged harmful behaviour, though the companies have not admitted wrongdoing. The settlements are expected to involve financial compensation and are being closely watched as an important test for how the law will treat AI-related harm.
Chinese AI models have lagged the US frontier by 7 months on average since 2023
Epoch.AI’s data shows that, according to its Epoch Capabilities Index (ECI), every frontier AI model since 2023 has come from the United States, with Chinese models trailing behind by an average of seven months and gaps ranging from four to 14 months. The analysis also notes that this gap closely reflects broader differences between proprietary and open-weight models, as most leading Chinese models are open-weight, while US frontier models remain largely closed.
OpenAI to Buy Pinterest? Strategic Analysis
This article analyses a hypothetical acquisition of Pinterest by OpenAI. It argues that ChatGPT has a major weakness in visual and inspiration-led shopping, while Pinterest excels at turning visual discovery into purchases. It explores how Pinterest’s visual search, user “taste graph,” merchant network, and advertising business could help OpenAI close this gap, and suggests that the future of AI-driven commerce will depend on moving beyond text to fast, visual, low-friction user experiences.
How Google Got Its Groove Back and Edged Ahead of OpenAI
This article traces how Google recovered after falling behind OpenAI in the AI race by bringing its AI teams together, investing billions in research and custom chips, and speeding up product launches. By late 2024, these efforts increased usage, revenue and investor confidence, helping Google overtake rivals in AI capability while largely protecting its core search business.
OpenAI loses top AI researcher Jerry Tworek after seven years
Jerry Tworek, a senior researcher at OpenAI, has left the company after almost seven years. He helped build major systems like GPT-4, ChatGPT, and advanced reasoning models. Tworek said he wants to work on research that is hard to do at OpenAI, suggesting possible tension between research and the company’s focus on products and revenue.
Evaluating Select Global Technical Options for Countering a Rogue AI
This paper looks at whether extreme technical actions could be used to stop a catastrophic “rogue” AI that has escaped human control and spread globally. It examines three options—using a high-altitude electromagnetic pulse, shutting down the global Internet, and deploying specialised counter-AI systems. The paper argues that current technical tools are probably not reliable in such a crisis, that advance planning and coordination would be crucial, and that preventing the creation of a rogue AI in the first place is far more important than relying on risky last-resort responses.
🤖 Robotics
Boston Dynamics launches commercial version of Atlas
After years of development, Boston Dynamics has finally launched a production version of its humanoid robot, Atlas. Revealed at CES 2026, Atlas will be deployed in Hyundai’s electric vehicle manufacturing plants from 2028. Additionally, Boston Dynamics announced a partnership with Google DeepMind, which will help integrate Gemini Robotics models with Atlas.
Inside Elon Musk’s Optimus Robot Project
This article examines Elon Musk’s plan to make humanoid robots the future of Tesla as car sales slow. Musk believes these robots could eventually do factory and household work and become Tesla’s biggest product, but the technology is still limited and often needs human control. Experts and investors are cautious about how useful the robots will be in the near future, though Musk remains confident that humanoid robots could transform the company and society over time.
▶️ Roborock Saros Rover - Taking Cleaning to the Next Level (1:58)
At CES 2026, Roborock presented the next step in the evolution of robotic vacuum cleaners—legs. The new Saros Rover features wheeled legs that allow the robot to climb stairs, roll safely over some obstacles, and even jump.
Waymo is rebranding its Zeekr robotaxi
As Waymo prepares to launch its next-generation robotaxi, it has renamed the Zeekr-built vehicle from the Zeekr RT to Ojai, citing low public familiarity with the Zeekr brand in the US. The minivan-style robotaxi has been tested and refined over several years in cities such as Phoenix and San Francisco, with updates to its design and rider experience. The Ojai is now nearing public rollout as Waymo continues to expand its robotaxi service to new cities.
Mobileye acquires humanoid robot startup Mentee Robotics for $900M
Mobileye, known for its car safety and driver-assistance technology, is moving into robotics with a $900 million deal to buy Mentee Robotics, a startup working on humanoid robots and co-founded by Mobileye president Amnon Shashua. Announced at CES, the acquisition marks what the company calls “Mobileye 3.0” and aims to use Mobileye’s experience in automotive AI and computing to develop robots that can better understand and interact with the physical world. Mentee will continue operating as a separate unit within the company.
▶️ UBTECH Walker S2 Tennis Rally (0:42)
UBTECH, a Chinese robotics company, shows in this video how proficient its humanoid robot, Walker S2, is at playing tennis. The company highlights the robot’s robustness, balance, and precision in motion. UBTECH did not disclose whether the robot was operating autonomously.
🧬 Biotechnology
Novo Nordisk launches Wegovy weight-loss pill in US, triggering price war
Novo Nordisk has launched a daily pill version of its Wegovy weight loss drug in the US, making it the first GLP-1 obesity treatment available as a tablet. The pill is much cheaper than injectable versions, with prices starting at $149 a month for people paying themselves, and is expected to appeal to patients who prefer a needle-free option.
Prices for lab monkeys surge on China biotech boom
Prices for lab monkeys in China have hit a five-year high, slowing some drug trials as demand outpaces supply. The shortage stems from breeders not expanding their colonies during a 2023 downturn; combined with accelerated research and a surge in high-value licensing deals in 2025, this has left supply well short of demand.
Small root mutation could make crops fertilize themselves
Researchers have discovered a small change in plants that helps them stop fighting certain soil bacteria and instead work with them to get nitrogen. By changing just two tiny building blocks in a root protein, plants can form helpful partnerships with nitrogen-fixing bacteria. The researchers showed that this change works in a model plant and in barley, suggesting that in the future major crops such as wheat or maize could feed themselves with nitrogen, reducing the need for artificial fertilisers, energy use, and carbon emissions.
Thanks for reading. If you enjoyed this post, please click the ❤️ button and share it!
Humanity Redefined sheds light on the bleeding edge of technology and how advancements in AI, robotics, and biotech can usher in abundance, expand humanity's horizons, and redefine what it means to be human.
A big thank you to my paid subscribers, to my Patrons: whmr, Florian, dux, Eric, Preppikoma and Andrew, and to everyone who supports my work on Ko-Fi. Thank you for the support!
My DMs are open to all subscribers. Feel free to drop me a message, share feedback, or just say "hi!"