
According to a report by The Information, OpenAI is using a customized version of ChatGPT to identify potential internal leakers by analyzing communications in Slack and email. This specialized version is reportedly used by OpenAI's security team when internal information becomes public. The team feeds the leaked content into ChatGPT, which has access to internal documents and communications. The system then attempts to trace the source of the leak by pinpointing documents or communication threads that contain the leaked information and identifying who had access to them.
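The report does not describe how the matching works, but the two steps it outlines, finding internal documents that contain the leaked material and then narrowing to the people who could read them, can be sketched in a few lines. The sketch below is purely illustrative and assumes nothing about OpenAI's actual system: it fingerprints text with word n-grams ("shingles"), scores each document by overlap with the leak, and unions the access lists of the matching documents. All names (`Document`, `trace_leak`, `readers`) are hypothetical.

```python
from dataclasses import dataclass


@dataclass
class Document:
    """A hypothetical internal document with an access list."""
    doc_id: str
    text: str
    readers: set  # users who could access the document


def shingles(text: str, n: int = 5) -> set:
    """Word n-grams used as a crude fingerprint of the text."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}


def trace_leak(leaked_text: str, corpus: list, threshold: float = 0.5):
    """Return documents whose content overlaps the leak, plus the
    union of users who had access to any matching document."""
    leak_prints = shingles(leaked_text)
    matches = []
    for doc in corpus:
        overlap = len(leak_prints & shingles(doc.text))
        score = overlap / max(len(leak_prints), 1)
        if score >= threshold:
            matches.append((doc, score))
    suspects = set().union(*(doc.readers for doc, _ in matches)) if matches else set()
    return sorted(matches, key=lambda m: -m[1]), suspects
```

In a usage scenario, a document that reproduces the leaked passage verbatim scores near 1.0, and the returned suspect set is simply the readers of the matching documents; a production system would presumably use semantic search rather than literal n-gram overlap, but the access-list intersection step is the same idea.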
At this time, it remains unclear whether this method has successfully identified any leakers, and specific details about what distinguishes this version of ChatGPT have not been disclosed. One clue: OpenAI engineers recently showcased the architecture of an AI agent capable of complex data analysis using natural language, designed to tap into institutional knowledge stored across platforms such as Slack and Google Docs. An agent of that kind could plausibly fill this role in internal security operations.