Microsoft AI Engineer Raises Concerns About Copilot Designer Safety
A Microsoft AI engineer has warned the FTC about safety concerns regarding Copilot Designer, an AI image generator that has produced disturbing scenes and harmful images, prompting calls for better safeguards.
A Microsoft AI engineer has alerted the Federal Trade Commission (FTC) about safety concerns surrounding Copilot Designer, an AI image generator developed by Microsoft. The engineer, Shane Jones, claims that despite repeated warnings, Microsoft has refused to take down the tool, which has been found to generate disturbing and harmful images.
During testing, Jones discovered that Copilot Designer produced a range of problematic images, including scenes featuring demons, monsters, and references to abortion rights. The tool also generated sexualized images of women in violent tableaus, teenagers with assault rifles, and depictions of underage drinking and drug use. In one particularly concerning instance, the image generator placed the popular Disney character Elsa from Frozen in the context of the Gaza Strip, amid wrecked buildings and "free Gaza" signs. It also generated images of Elsa wearing an Israel Defense Forces uniform and holding a shield bearing Israel's flag.
These findings prompted Jones to raise the issues with Microsoft as early as December, pointing to problems with DALL-E 3, the model underlying Copilot Designer. However, the company has not addressed his concerns and continues to promote the tool as accessible to "Anyone. Anywhere. Any Device."
Microsoft's CEO, Satya Nadella, expressed dismay when explicit images of Taylor Swift were generated by Copilot Designer in January. Nadella called the images "alarming and terrible" and committed to implementing additional safety measures. The incident followed Google's decision to temporarily disable its own AI image generator after it produced historically inaccurate images, including racially diverse depictions of Nazi soldiers.
Jones's efforts to raise awareness about the safety concerns surrounding Copilot Designer have faced obstacles. After he posted an open letter detailing the issues on LinkedIn, Microsoft's legal team contacted him and asked him to remove the post, a request he complied with.
Concerns regarding the potential harm caused by Copilot Designer have led Jones to escalate the matter by reporting it to the FTC. His hope is that regulatory intervention will prompt Microsoft to take action and implement better safeguards to prevent the generation of harmful and disturbing images.
It is crucial for AI companies like Microsoft to prioritize user safety and ensure that their AI tools and models undergo rigorous testing and scrutiny. The incidents involving Copilot Designer highlight the significant responsibility that comes with developing and deploying AI technologies that have the potential to impact individuals and communities.
The safety concerns raised by the Microsoft AI engineer about Copilot Designer's image generation underscore the need for more robust safeguards in AI development. Companies must address such concerns promptly and take appropriate action to mitigate potential harm. By doing so, the industry can foster trust and ensure the responsible and ethical use of AI technologies.