Microsoft Study Highlights Vulnerabilities in GPT-4
A recent scientific paper affiliated with Microsoft has examined the trustworthiness and potential toxicity of large language models (LLMs), specifically focusing on OpenAI’s GPT-4 and its predecessor, GPT-3.5. The study suggests that while GPT-4 is generally more reliable than GPT-3.5 in standard situations, it is more susceptible to being manipulated by “jailbreaking” prompts designed to bypass its safety… Read More »