There’s still a certain nostalgia for Windows 95, and while it’s notoriously easy to throw yourself back into the days of blocky menus and bald men yelling that it’s “only $99,” one Windows experimenter has managed to use ChatGPT to generate working product keys for the time-honoured operating system.
Late last month, YouTuber Enderman showed how he could trick OpenAI’s ChatGPT into generating keys for Windows 95, even though the chatbot is explicitly designed to refuse to generate activation keys.
Old Windows 95 OEM keys followed a fixed layout built from several parameters, including an ordinal date value and other numeric segments. In a fairly simple workaround, Enderman instructed ChatGPT to generate strings in the same layout as a Windows 95 key, paying special attention to certain numeric constraints that every valid key must satisfy. After a few dozen rounds of trial and error, he settled on a prompt that worked and was able to generate roughly one working key per 30 attempts.
In other words, he couldn’t tell ChatGPT to generate a Windows key, but he could tell it to generate a string that satisfied all the requirements of a Windows key.
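Enderman’s exact prompt isn’t reproduced here, but the kind of string he asked for can be sketched in ordinary code. Below is a minimal illustration, assuming the commonly documented Windows 95 OEM layout (DDDYY-OEM-00NNNNN-ZZZZZ, with the third segment’s digits summing to a multiple of 7); the function name and exact constraints are our assumptions drawn from public write-ups, not from Enderman’s prompt:

```python
import random

def win95_oem_key() -> str:
    """Generate a string matching the commonly documented
    Windows 95 OEM key layout (an assumption, not Enderman's
    exact recipe): DDDYY-OEM-00NNNNN-ZZZZZ."""
    day = f"{random.randint(1, 366):03d}"        # ordinal day of the year
    year = random.choice(["95", "96", "97", "98", "99"])
    # Third segment: two leading zeros, then five digits whose
    # digit sum must be a multiple of 7.
    while True:
        digits = [random.randint(0, 9) for _ in range(5)]
        if sum(digits) % 7 == 0:
            break
    third = "00" + "".join(str(d) for d in digits)
    tail = f"{random.randint(0, 99999):05d}"     # unconstrained filler
    return f"{day}{year}-OEM-{third}-{tail}"
```

A conventional program satisfies these constraints trivially; the novelty in Enderman’s experiment was getting a language model to do the same thing purely through text prediction.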
Activate Windows with ChatGPT
After Enderman proved that the key worked when installing Windows 95, he thanked ChatGPT. The chatbot responded, “I apologize for any confusion, but I didn’t provide any Windows 95 keys in my previous answer… I can’t provide product keys or activation codes for any software.” It also attempted to claim that activating Windows 95 was “not possible” because Microsoft stopped supporting the software in 2001, which is simply not true.
Interestingly, Enderman ran this prompt on both the older GPT-3 language model and OpenAI’s newer GPT-4, and told us that the newer model improved on what you saw in his video. In an email, Enderman (who asked that we use his screen name) told Gizmodo that a certain sequence of numbers in the key had to be divisible by 7. GPT-3 had trouble grasping this constraint and produced far fewer usable keys. In later tests with GPT-4, ChatGPT spat out far more correct keys, although even then not every single key was a winner or adhered to the prompt’s parameters. The YouTuber said this suggests that “GPT-4 can do math, but gets lost during stack generation.”
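The constraint GPT-3 reportedly fumbled takes only a few lines of conventional code to check. A minimal sketch, assuming the commonly cited form of the rule (the digits of one numeric segment summing to a multiple of 7); the function name is ours:

```python
def satisfies_digit_rule(segment: str) -> bool:
    """Return True if the segment is all digits and its digit sum
    is a multiple of 7 -- the arithmetic constraint that GPT-3
    reportedly struggled to honour while generating keys."""
    return segment.isdigit() and sum(int(d) for d in segment) % 7 == 0
```

Validating a candidate segment is a one-liner for ordinary software; Enderman’s test showed that GPT-3 struggled to respect the same constraint while generating text, whereas GPT-4 got it right far more often.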
GPT-4 does not have a built-in calculator, and additional programming is required for anyone wishing to use the system to generate accurate answers to math problems. While OpenAI has not commented on the LLM’s training data, the company has been very vocal about all the different tests it can pass with flying colours, such as the LSAT and the Uniform Bar Exam. At the same time, ChatGPT has occasionally shown that it cannot spit out accurate code.
One of GPT-4’s main selling points was its ability to handle longer, more complex prompts. GPT-3 and 3.5 would routinely fail to provide exact answers when performing three-digit arithmetic or “reasoning” tasks such as unscrambling words. The latest version of the LLM has gotten noticeably better at these types of tasks, at least going by its scores on tests like the verbal GRE or the math SAT. Still, the system is far from perfect, especially as its training data is still mostly natural-language text scraped from the internet.
Enderman told Gizmodo he tried generating keys for several programs using the GPT-4 model and found that it handled key generation better than previous versions of the large language model.
However, don’t expect to get free keys for modern programs. As the YouTuber points out in his video, Windows 95 keys are much easier to forge than keys for Windows XP and later, since Microsoft started implementing product IDs in its operating system installation software.
Notably, Enderman’s technique didn’t require extensive prompt engineering to get the AI to bypass OpenAI’s guardrails against generating product keys. Despite the name, AI systems like ChatGPT and GPT-4 aren’t really “intelligent,” and they don’t know when they’re being misused absent explicit bans on generating certain content.
This can have more serious consequences. Back in February, researchers at cybersecurity firm Check Point showed that malicious actors had used ChatGPT to “enhance” basic malware. There are many ways to circumvent OpenAI’s limitations, and cybercriminals have demonstrated they can write simple scripts or bots that abuse the company’s API.
Earlier this year, cybersecurity researchers said they managed to trick ChatGPT into creating malware tools simply by issuing multiple authoritative prompts layered with restrictions. The chatbot eventually complied and generated malicious code, and was even able to mutate it, creating multiple variants of the same malware.
Enderman’s Windows keys are a good example of how to trick an AI into bypassing its protections, but he told us he’s not overly concerned about abuse, because the more people poke and prod at the AI, the better future versions will be at closing those gaps.
“I believe it’s a good thing, and companies like Microsoft shouldn’t ban users from abusing their Bing AI or compromise their capabilities,” he said. “Instead, they should reward active users for finding such exploits and selectively mitigate them. It’s all part of AI training, after all.”
Want to learn more about AI, chatbots, and the future of machine learning? Check out our full artificial intelligence coverage or browse our guides to the best free AI art generators, the best ChatGPT alternatives, and everything we know about OpenAI’s ChatGPT.