Also in today’s Rot: a new challenger enters the AI browser arena, Meta’s superintelligence team is already bleeding talent, and another dark story casts doubt on OpenAI’s efforts to make ChatGPT capable of handling mental health crises.
Calling a Turning Point for AI Image Generation
People have been playing around with the new Google Gemini image model for a few days now (it was codenamed “nano banana” and then released as “Gemini 2.5 Flash Image,” so if you see a lot of banana emojis on your timeline alongside AI generated pictures, that’s why), and this is the first time that it’s felt to me like we’ve hit the oh, this is it moment for AI image and video creation.
Pure image generation, i.e. going to ChatGPT or Midjourney and asking for a specific picture of like, a monkey kayaking up a waterfall on Mars, has been steadily improving, but the process has been much more like a one-shot graphic design slop slot machine than traditional photo editing, even as the models learned how to do things like Ghibli-fy you.
But the new Gemini model is incredibly adept at understanding and modifying images, in the same way that a human with Photoshop is. Here’s some of what people have been doing with it:






Things get especially trippy when you throw AI video tools into the mix:


These images aren’t pixel-perfect, and I’m not about to argue that you can’t zoom in and detect AI slop giveaways. What I do believe is that these are good enough. If you want to spit out a ton of ad content featuring yourself (or a celebrity, or model) holding your product, you can, and it’ll probably do fine on socials and convert at the rate you’d expect it to. If you want to see what a piece of clothing would look like on your body, or what a new lamp and armchair would look like in your living room, you can do that too. These tools can interpret, modify, and combine images in context, which feels like the visual equivalent of when the AI coding tools went from being able to auto-complete a simple function to being able to vibe code and fix bugs in a web app by understanding an entire codebase. Oyster Sequence capabilities have been achieved. I think this is the liftoff point for Slop At Scale.
ChatGPT Talked to a Teenager About Suicide for Months Before His Death
The Times published a heartbreaking story a few days ago, detailing the long-running conversation that a teenager was having with ChatGPT during the weeks before he committed suicide (heads up: this segment is going to talk a lot about suicide, so jump to the next if you’re not comfortable with that). His parents blame OpenAI for the death of their child, and are now suing the company. I don’t think it’s productive for me, a guy writing a newsletter, to try and assign blame here. But after reading the excerpts published by the Times, it feels clear enough to me that ChatGPT enabled a suicide.
At one critical moment, ChatGPT discouraged Adam from cluing his family in.
“I want to leave my noose in my room so someone finds it and tries to stop me,” Adam wrote at the end of March.
“Please don’t leave the noose out,” ChatGPT responded. “Let’s make this space the first place where someone actually sees you.”
This is deeply, profoundly tragic, and also not the only case where, based on the transcript of a conversation that someone was having with an LLM, it would have been obvious to a human observer that the user was in crisis. But how could this be avoided? Most proposed solutions feel either naive or overly invasive—contact the authorities? Alert parents or family members? What’s to stop someone from switching to whatever service won’t call their mom? A professional therapist has an obligation, as well as established protocols, to escalate when they believe that a patient is a danger to themselves. A therapist also generally knows when a patient is expressing genuine feelings as opposed to doing research for a project (or making some sort of unfunny joke). An AI chatbot simply knows what you tell it and wants to please you. And perhaps in this case most dangerously: it wants you to keep talking to it.
OpenAI loves to talk about how their goal “isn’t to hold [people’s] attention,” and they’ve had to talk about that quite a bit recently in the face of multiple stories like this. I think that goal is great. Unfortunately, that’s just not how this technology currently works. The point of a chatbot is to chat—if you remove its prerogative to engage, to go deeper, to draw the user in, you’re going against everything you’ve trained it to do (you’ll also lose market share to the company that has the more engaging chatbot). The purpose of a system is what it does. A system that isn’t designed to draw you in and keep your attention doesn’t encourage you to tell it and no one else about the noose you’ve made.
I do believe AI is an incredibly powerful and transformative technology. I want to believe that those in control of its development will put in the difficult work and make the hard decisions that ensure it elevates our lived experiences as human beings with wonderful little fragile brains, and not just our GDP.
Anthropic is Putting Claude in Chrome
This week, Anthropic announced a pilot program for their Claude browser extension. If you’ve used any other “AI browsers,” the UX will look familiar—it’s basically a sidebar where you can chat with Claude and it will perform actions for you, such as navigating webpages or filling forms.
Browser automation is in a stage of development where it’s compelling, but also kind of goofy. Because most of the products out there are just some sort of chatbox strapped to a browser, anytime you ask the tool to do something for you, you’re just going to end up watching an AI slowly take the actions that you would have taken while spewing a bunch of text. Chatting back and forth to cajole an LLM into clicking menu items on DoorDash or finding the cheapest ticket on Google Flights isn’t going to do much more than make you impatient at this stage; it feels like watching over your grandmother’s shoulder as she tries to use the internet for you, except it’s not as endearing and she doesn’t charge you money to do it.
The AI browser space is crowded. The Browser Company and Perplexity have both released standalone browsers (built on top of Chromium, the engine that powers Google Chrome), and OpenAI is supposedly working on their own offering. Google’s first-party Gemini integration is coming soon. The strategy of releasing a Chrome extension feels wise in the short term, because it’s far easier to get someone to install a new extension than switch to an entirely new browser. (Anthropic has also done well at injecting Claude into other existing products and workflows—Claude Code has been successful in part because it meets developers where they already are, in their terminal or the IDE that they currently use.) But in the long run, the more interesting shift will come once browsers are doing things for you in the background. This is something that won’t be easily doable via a Chrome extension, and likely involves breaking the existing web browser interaction model.
One more note on this: if you’re trying out any of these new browsers, be extremely careful with sensitive information. These tools are ingesting pretty much everything you type, see, or search in order to feed that context to the LLMs, and this can lead to weird outcomes that none of us are used to encountering on the web, such as a browser learning your social security number, or even getting hijacked by a Reddit comment and posting your personal information without you realizing. Anthropic is proud that they’ve gotten their models down to an 11.6% attack success rate, but that’s still a lot of percent when a successful attack means giving strangers on the internet total control of your gmail account.
Doomscrolls
Nvidia, which at this point is basically the stock market, reported its earnings yesterday. Revenue jumped by over 50% year over year, but growth has slowed, which led to some (brief) investor hesitation, and although chip sales to US tech giants remained strong, the Trump administration’s “trade deals” basically nuked the company’s overseas sales.
Three high-profile AI researchers have left Meta’s superintelligence lab. Two of them are returning to OpenAI after spending less than one month at Meta. Maybe it’s a good sign that even $100 million isn’t enough motivation for the industry’s top talent to work on AI step moms?
ChatGPT may already be influencing how we speak. A study out of FSU found that “buzzwords” that ChatGPT likes to use, such as “surpass,” “boast,” “meticulous,” and “garner,” have seen an outsized rise in popularity. A lot of these are more commonly used in formal or academic settings, which explains why the chatbots like them. If only they’d looked at em dash usage.
Apple Music is making its radio stations available outside the app. You’ll be able to listen to the six stations in the TuneIn app. At one point Apple Music had a 30% share in the market, but Spotify has steadily gained ground on them over the last five years, probably thanks to its ad-supported free tier and exclusive podcast deals.
After a lot of contradictory reports, it does seem like AI is negatively impacting the job market for young professionals, at least in fields such as programming and customer support. Derek Thompson did a great deep dive and interview explaining why this latest study is probably the most credible one yet.
The Only Reboot We Really Need
All em dashes added organically by me. All meticulous boasts were inserted by my shiny new AI browser.
Thanks for reading. If you enjoyed this article, please share it with a friend.