After engineer's complaints, Microsoft blocks terms that made violent, sexual images on Copilot Designer
Key notes
- A Microsoft engineer exposed safety issues with the AI image generation tool, Copilot Designer.
- The tool could create violent content, sexualized images, and biased material.
- Microsoft is patching the tool with blocked prompts, warnings, and improved safety filters.
Microsoft is working to fix its AI image generation tool, Copilot Designer, after company engineer Shane Jones revealed serious safety flaws. Jones, tasked with testing the tool’s safety, discovered that it could be used to generate disturbing content.
This included violent scenes involving teenagers, sexualized images, and biased content on sensitive topics. As we reported earlier, the tool disregarded copyright, churning out images of Disney characters in inappropriate situations.
Jones began reporting these issues internally in December 2023. While Microsoft acknowledged the problems, it did not take the tool offline. Jones then escalated the matter, contacting OpenAI and U.S. senators, and ultimately sending letters to the FTC and Microsoft’s board.
Microsoft has responded with initial steps. Certain prompts are now blocked, users receive policy violation warnings, and safety filters are being improved.
This incident exposes the challenges of AI image generation. Powerful as it may be, such technology requires strong safeguards. It also raises questions about internal communication and the responsiveness of tech giants to ethical concerns.
Can Microsoft regain trust? Only time will tell whether its actions, spurred by Jones’ persistence, will lead to a more responsible approach to AI development.
More here.