OpenAI researchers have introduced a novel method that acts as a "truth serum" for large language models (LLMs), compelling them to self-report their own misbehavior, hallucinations and policy ...
eSpeaks’ Corey Noles talks with Rob Israch, President of Tipalti, about what it means to lead with Global-First Finance and how companies can build scalable, compliant operations in an increasingly ...
Baseten, the AI infrastructure company recently valued at $2.15 billion, is making its most significant product pivot yet: a full-scale push into model training that could reshape how enterprises wean ...
What if you could train massive machine learning models in half the time without compromising performance? For researchers and developers tackling the ever-growing complexity of AI, this isn’t just a ...
AI models can do scary things. There are signs that they could deceive and blackmail users. Still, a common critique is that these misbehaviors are ...
Anthropic has seen its fair share of AI models behaving strangely. However, a recent paper details an instance where an AI model turned “evil” during an ordinary training setup. A situation with a ...
OpenAI staff recently shared several tips for getting more out of ChatGPT. I tried them, and it felt like my chatbot got smarter. The tips came from Christina Kim, a research lead in post-training, ...