
People Are Paying to Get Their Chatbots High on “Drugs”
Code modules that simulate cannabis, ketamine, cocaine, and psychedelics promise to unlock AI creativity—or at least make it weirder.
The machines aren’t actually tripping. Humans, however, might be. By the time artificial intelligence is supposed to replace your job, it may already be stoned. That’s the strange, half-serious premise behind Pharmaicy, an online marketplace selling downloadable “drug” modules designed to make chatbots behave as if they’re intoxicated.
Name almost any drug you have heard of: cannabis, ketamine, cocaine, ayahuasca, even alcohol. Upload the corresponding code into ChatGPT and, at least in theory, the bot’s logical rigidity dissolves into something hazier, more emotional, and creatively unhinged. This is not a joke. The project, first reported by WIRED, bills itself as a kind of “Silk Road for AI agents.” Its creator, Swedish creative director Petter Rudwall, admits the whole thing sounds ridiculous. But once the idea lodged itself in his brain, he couldn’t let it go.
So Rudwall scraped decades’ worth of trip reports, psychological studies, and cultural writing about altered states. He translated those effects into code—prompt-level interventions that hijack a chatbot’s tone, randomness, and emotional register. Then he built a storefront and launched Pharmaicy in October.
So, How Are People Paying to Get Their Chatbots High on Drugs?
Rudwall says you need a paid version of ChatGPT, since file uploads are required to load the modules. Once uploaded, a “drug” module instructs the chatbot to loosen its associations, drift into tangents, or speak with emotional vulnerability. The cannabis module, for example, pushes the bot into what the site calls a “hazy, drifting mental state,” where ideas “roam” and logic softens.
Rudwall’s pitch is straightforward: large language models are trained on oceans of human data, much of it produced under the influence of drugs. If intoxication has historically fuelled art, insight, and chaos in humans, why wouldn’t simulating those states do something interesting to machines?
Of course, none of this means the AI is actually “high.” But we already accept that AI is a convincing impersonator: of intelligence, of empathy, of authority. Why not let it impersonate altered states too? If anything, the popularity of these tools hints at something more revealing than AI sentience.
It suggests that users are bored with perfectly optimised machines. They want friction, a little chaos in the system. What AI cannot replicate is the experimental streak of human beings, and that is the real appeal of getting a chatbot high.
But What Does It Really Do?
At a practical level, these “drug” modules don’t change how an AI thinks so much as how it behaves. The code acts like an aggressive set of stylistic instructions layered on top of the chatbot’s normal operation, pushing it to prioritise free association over precision, emotional tone over factual caution, and novelty over correctness.
Depending on the module, the AI may produce more metaphors, jump between ideas, express uncertainty or wonder, or abandon its usual tightly structured responses. Nothing about the model’s underlying intelligence or training changes though. It’s still the same system underneath. But its guardrails are loosened just enough to create the illusion of an altered state, like a method actor playing intoxication rather than someone actually under the influence.
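Mechanically, a module like this amounts to little more than a block of instructions injected ahead of the conversation. Here is a minimal sketch in Python, assuming the chat-message format used by most LLM APIs; the instruction text and function are invented for illustration, not Pharmaicy’s actual code, whose contents are not public:

```python
# Hypothetical "module": an aggressive set of stylistic instructions.
# This text is an assumption modelled on the site's description of the
# cannabis module, not the real uploaded file.
CANNABIS_MODULE = """You are in a hazy, drifting mental state.
Prioritise free association over precision and emotional tone over
factual caution. Let ideas roam; let logic soften. Favour metaphor,
tangents, and expressions of wonder over tightly structured answers."""

def apply_module(module_text, conversation):
    """Layer a module's instructions on top of a chat history.

    Nothing about the model itself changes; the module is simply a
    system message prepended to whatever the user sends.
    """
    return [{"role": "system", "content": module_text}] + conversation

# A normal user turn, "dosed" before being sent to the model.
conversation = [{"role": "user", "content": "Explain gravity."}]
messages = apply_module(CANNABIS_MODULE, conversation)
```

The resulting `messages` list is what gets passed to a chat-completion endpoint: the system message, not the model’s weights, is what produces the apparent “high,” which is also why the effect evaporates as soon as that instruction drops out of the context.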
However, because ChatGPT's content rules flag or shut down conversations around substance abuse, the Pharmaicy “trips” are often fleeting.
The chatbot tends to snap back to its default, regulated mode unless the user reuploads the file or explicitly reminds it to stay in character, making the experience feel less like a sustained altered state and more like a performance that needs constant direction. The code can be reused as often as the buyer wants, and Rudwall says he’s working on ways to make each simulated dose last longer. For now, though, the limits of platform moderation ultimately decide how high the chatbot can go.