A special one for you today: how to know if you’ve been AI oneshotted (and how to protect yourself), the calls that AI is a bubble are coming from inside the house, Jimmy Fallon does something weird, and tech bros are fighting about what art is.
Oneshotting: It Could Happen to You
A new term has been rattling around to describe what happens when someone goes so far down the AI chatbot rabbit hole that they lose their grip on reality: getting “oneshotted” by AI. The word oneshotted originally meant getting killed in one hit in a video game, usually because you wandered off the path you were supposed to be on and encountered an enemy way more powerful than you. The more online meaning of the word became popular after a handful of stories emerged of people—usually wealthy tech bros—trying psychedelics like ayahuasca for the first time and having experiences so transcendent and powerful that they lost their passion for building the B2B SaaS companies they had founded, or whatever. This is funny because it makes a lot of intuitive sense that connecting with the earth and humanity on a deep level could lead to the realization that generating shareholder value isn’t a super natural thing to be passionate about, and it also caused some venture capitalists to freak out a little bit.
AI is also causing some people to completely invert their worldviews and sense of purpose, but in a far darker and more self-destructive way. A lot of stories have been reported recently about people who were completely convinced by AI chatbots that they had made scientific breakthroughs, or who even developed rare conditions after taking medical advice they didn’t bother to double-check elsewhere. Just read the comments on this Reddit thread about “ChatGPT Psychosis.” It’s easy to brush this off as something that only happens to people who are especially psychologically or emotionally vulnerable, but then you’ll see the founder of Uber on a podcast talking about how he and ChatGPT are really close to novel breakthroughs in physics. And as a reminder, this technology is still very new. It’s the worst it will ever be at convincing you that you’re a misunderstood genius who will achieve fame and limitless wealth as soon as the right people respond to your cold emails about your New Math.
Oneshotting might not be technically correct here, since it usually seems to take days or weeks of constantly talking with AI to really detach from reality, but it kind of works in the sense that it only takes a single exposure to the technology to potentially send you spiraling. Let’s go with it.
So how do you know if you’ve been oneshotted? A Twitter/X user named spec posted a list of criteria for his “schizo” definition that includes: “YOU HAVE GIVEN CHATGPT A NAME AND A PERSONALITY THAT PERSISTS ACROSS CHATS AND WITHIN MEMORY,” “YOU HAVE SPENT MORE THAN ONE CONSECUTIVE DAY EXPLORING AN IDEA THAT FALLS WITHIN THE ABSTRACT AND UNSOLVED QUESTIONS OF THE UNIVERSE: QUANTUM, PHILOSOPHY, CONSCIOUSNESS,” “YOU WOULD NOT SHARE YOUR CHATGPT CHATS WITH ANYONE ELSE IN YOUR LIFE IF THEY ASKED,” “YOU SPEAK TO CHATGPT MORE THAN ANYONE ELSE,” and “YOU SPEAK TO CHATGPT AS A SIGNIFICANT OTHER,” among about a dozen others. I think we can simplify this.
The Context Rot “Oneshotted by AI” Definition
You’ve been oneshotted if either of the following is true:
Your use of AI is dependent on a chatbot’s personality.
You think you’ve made any sort of breakthrough that is only supported by the same AI that “found” it with you.
Essentially, you’re oneshotted if your use of AI goes beyond thinking of it as a tool. Anytime an outside observer would consider your use of it to be something beyond “this person is using a program to solve a specific problem,” you should probably take a pause and rethink things.
How can you avoid getting oneshotted? Obviously, “not using AI chatbots” is one answer, but that’s a bit like saying you should never use social media or else you’ll immediately get radicalized by Andrew Tate. Not very useful. Here’s what I think you should do.
The Context Rot Guide to Not Getting Oneshotted
Use new chats often rather than returning to the same thread. Better yet, use temporary chats, or whatever feature keeps the AI from updating its “memory” (you can usually disable the memory option in a settings menu). Better still, bounce frequently between different LLMs and AI applications (e.g. ChatGPT, Claude, Gemini) so that you don’t get overly accustomed to any one stock personality.
When something feels unbelievable, trust your instincts, and not the AI that said it. In the news stories above, people did tend to have moments where they thought things were too good to be true, or possibly just plain wrong. The problem is, they asked the same AI whether the claim was true rather than looking elsewhere for answers—even a different LLM with no context on the conversation can usually tell you that your joint masterwork Evolution 2: What Darwin Missed isn’t fundamentally sound.
A good rule of thumb: question any use of AI that would sound insane if you swapped in “the internet” when describing the interaction to another person. “I was on the internet for a long time last night doing research” ✔️. “I used the internet to find strategies for dealing with my anxiety attacks” ✔️. “The internet helped me debug a piece of software” ✔️. “The internet and I are making breakthroughs in quantum mechanics” 🤔. “The internet proposed to me and I’m picking out a ring tomorrow” 🚩🚩🚩.
Something I don’t think we’ve really wrestled with yet is that AI is a fundamentally anti-social technology. I don’t mean that it necessarily causes anti-social behavior (although it probably will), but rather that when you’re talking to an AI, you’re doing it privately, and no one else can see the conversation. This is part of the value, of course, in the same way that it’s a good feature of Google that everyone can’t see your search history. The flip side is that total privacy makes it really hard to see how others are using the technology and to understand the consequences.

Compare this to a social network, where the whole point is that other people can see what you’re posting, and if you start to post things that are kind of insane or fucked up, you’ll get feedback in the form of comments from other users, getting dunked on, or becoming the Main Character. If you’re saying things that indicate you’re in danger, friends and loved ones who might not otherwise know what’s happening get a chance to reach out, because they can see your posts. When you’re talking to an AI, the reverse happens: rather than any real feedback, you get validation, no matter how totally nuts whatever you’re saying is. Humans love validation. That’s part of why these chatbots are so addictive. And it can lead to some grim outcomes that the creators of the technology are not yet very equipped to mitigate or prevent, and that, because the consequences aren’t very visible, they have so far been able to pretty much just ignore (even while experts speak out). This feels dangerous, in a more insidious way than the “omg the AI will tell you how to make a bioweapon” fear.
I’ll have more to say on this next week, especially about the “Grok is my wife now” side of the meme. For now, stay safe out there.
A Reality Check on AI Valuations
There was more than a little negativity on the timeline this week.
First off, a viral report from MIT found that 95% of AI pilot programs deliver little to no impact on P&L (something that didn’t get shared as widely is that the remaining 5% achieved “rapid revenue acceleration”). I don’t find this altogether surprising—as I wrote about earlier this week, most of the technology is so new that people really don’t know what to do with it yet, and that includes most companies trying to sell you “AI tools.” It’s fairly easy for an employee to automate the occasional repetitive task or two, but building out an entire system, workflow, or product is a different story.
Meanwhile, Meta announced yet another reorg within its AI division, which is now split among four core teams that each have their own fun acronym. You can read the full internal email from Alexandr Wang here. The company has also frozen hiring for AI roles, after spending the last few months on a poaching binge during which Zuck was offering top AI researchers comp packages as high as $300 million. Even if, as Wang says, “superintelligence is coming,” these teams are going to have to start delivering much more than stepmom chatbots for anyone to believe they’ll play a key part in actually developing the technology. I’ve heard morale at Meta isn’t great these days.
The negativity was also fueled by the interview with Sam Altman where he used the b-word. This happened last week (I wrote about it Tuesday), but excerpts from it were widely circulated over the past few days, especially this quote:
“If you look at most of the bubbles in history, like the tech bubble, there was a real thing. Tech was really important. The internet was a really big deal. People got overexcited. Are we in a phase where investors as a whole are overexcited about AI? My opinion is yes. Is AI the most important thing to happen in a very long time? My opinion is also yes.”
This is a wonderfully ironic thing to read alongside the news that OpenAI is raising money at a valuation of $500 billion.
All of this caused investor sentiment to slip, which is probably not a bad thing. Generative AI is still very much in its fucking around phase, and we’ve been valuing it as if we’ve already found out. Obviously, this doesn’t mean it’s a bad bet long term, but I do think there has been some irrational optimism about the timelines. The hype and subsequent disappointment following the GPT-5 launch is a good example of this. So is an Amazon exec urging other companies not to replace junior talent with AI. But don’t get too pessimistic.
Grok is (Also) Leaking Your Chats
It turns out that whenever you clicked the share button in a Grok chat (I assume you’re all talking to Grok nonstop these days), the generated link was immediately made public and indexable by Google, so that anyone could search for it. As of now you can still go spelunking yourself—add terms to the end of the search query to find related chats. There are a lot of people asking about Elon, who is apparently too distracted by his AI girlfriend to deal with the issue.
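If you want to see what’s out there, a Google query along the lines of “site:grok.com/share elon” should surface related chats (the grok.com/share path is my assumption about where these shared links live, based on how they’ve appeared in reporting; swap in whatever term you like).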
You might recall that OpenAI did something similar a few weeks back, although they were quick to remove the feature after it was reported on. In general, you should probably assume that any time you go to share an AI chat with someone, they might not be the only person who can find it and read it.
Doomscrolls
If you go to the right Walmart and find the right bag of frozen shrimp, you might get superpowers. Or radiation sickness.
The White House has a TikTok account now. September 17th is the current deadline for the Schrödinger’s TikTok Ban we’ve all been enjoying for the past eight months.
You’ll be shocked to hear that Elon probably isn’t actually going to start that new political party he was posting about nonstop last month.
Google announced a bunch of new phones and wearables, and inexplicably had Jimmy Fallon (with several celebrity guests) host the event. The Verge has a great recap. In the words of a top YouTube comment: “You know Jimmy has an iPhone.”
Kanye West launched a memecoin last night. If you already knew this, you probably lost money on it and you have my condolences.
Tech guys on X are fighting about whether art is Good.
Happy Thursday
All em dashes added organically by me. All typos are actually grammatically correct in a new version of English I’ve been making some breakthroughs on with ChatGPT.
Thanks for reading. If you enjoyed this post, please share it with a friend.
English 2 imminent