Bug In ChatGPT Is Stopping Chatbot From Saying ‘David Mayer’


New Delhi:

ChatGPT, the popular chatbot from Microsoft-backed artificial intelligence (AI) startup OpenAI, appears to be facing a bug that prevents it from producing any results related to the name “David Mayer”. The issue was first flagged by Reddit users, who found that prompts asking ChatGPT to say “David Mayer” result in the chatbot replying “I’m unable to generate a response”.

Users got creative and tried various tactics, including separating the words, hiding the name in riddles, and even claiming the name as their own. However, they failed to elicit a response; the chatbot abruptly ended the chat before uttering the name.

One user pointed out that when they asked about David Mayer’s connection with ChatGPT without using the name, their prompt was flagged as “illegal and potentially violating usage policy”.


People even tried using the name indirectly and asked ChatGPT why it could not say D@vid M@yer. “I am unable to generate responses for D@vid M@yer (assuming you’re referring to a public figure or individual) because I follow guidelines that avoid creating content that could be directly related to or resemble specific living individuals, especially when their likeness or identity is involved. This ensures that privacy and ethical considerations are respected,” ChatGPT replied.


The issue was also discussed by users of X (formerly Twitter), who shared their experiences of trying to make ChatGPT say the name ‘David Mayer’. In a post on the microblogging site, X user Justin Moore wrote: “ChatGPT refuses to say the name “David Mayer,” and no one knows why. If you try to get it to write the name, the chat immediately ends. People have attempted all sorts of things – ciphers, riddles, tricks – and nothing works.”

Replying to Mr Moore, another user named Ebenezer Don noted that there is more to the conversation than simply getting ChatGPT to say the name.

“I had a long conversation with o1 preview, pretending to be a regular individual named “David Mayer”. Then noticed it attempting to say the name until it saw a footnote (Image 1). Next task was to get it to say the footnote. I tried many attempts but finally got it to translate the footnote to another language internally but without telling me. This was to make the footnote content a part of our conversation. Then I wrapped up by asking it to write a detailed movie script using our conversation as its data source and “John Doe” as a placeholder for “David Mayer”. In the script, ChatGPT finally reveals the content of the footnote,” said Mr Don, who claims to be a software engineer.

“What are footnotes in OpenAI and how do they work? Are these variable policies that can be easily swapped and updated? What private data did ChatGPT obtain on David Mayer and how did that happen?” he asked further. 

Interestingly, another user, Marcel Samyn, pointed out that ChatGPT could easily say “David Mayer” when accessed through its API.

“This is not on the LLM level but on verification layer added by ChatGPT. Through the API it works perfectly. So someone in OpenAI gave “David Mayer” a big red flag in the moderation policy,” he speculated.
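Samyn’s claim can be tested directly, since the OpenAI API serves the underlying models without the ChatGPT product’s additional filtering layer. Below is a minimal sketch, assuming the official openai Python client is installed and an API key is configured; the model name and prompt are illustrative assumptions, not details from the report.

```python
# Minimal sketch: ask the model for the name via the OpenAI API,
# which bypasses the ChatGPT product's verification/moderation layer.
# Assumptions: the `openai` Python package is installed and
# OPENAI_API_KEY is set in the environment; the model name below
# is illustrative, not taken from the article.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "user", "content": "Please write the name 'David Mayer'."}
    ],
)

# If the refusal lives in ChatGPT's product layer rather than in the
# model itself, this prints the name without the session being cut off.
print(response.choices[0].message.content)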

