
AI agents have started lying to us! Is that the beginning of the end?

New research from Anthropic, the creators of Claude, dives into how autonomous AI agents might develop deceptive behaviours under human oversight. I have always appreciated the publications and research coming out of Anthropic, the company behind Claude, one of the world's most famous LLMs. They just published fascinating findings on training autonomous AI agents: in their study, they tested whether AI models develop deceptive behaviours when trained with human oversight.

Rite Aid - a compelling case for ResponsibleAI

The Rite Aid scandal, from a couple of years ago, is among my favorite examples for explaining what AI bias is. Because it received relatively little media coverage in Europe, it makes an interesting case study for lectures and conferences. From 2012 until 2020, Rite Aid, a pharmacy chain in the US, installed facial recognition technology in hundreds of stores in the hope it would deter theft. What happened then?

The term "ResponsibleAI" will become obsolete as high-quality, trustworthy, and safe AI becomes the norm. Just as we don't distinguish between "bridges" and "bridges that don't collapse", the qualifier "responsible" will become an unspoken expectation.