Microsoft Engineer Raises Concerns Over Disturbing Images Generated by Copilot Designer, According to CNBC Report

A Microsoft engineer has raised concerns about disturbing images generated by the company’s AI image generator, Copilot Designer, according to a report from CNBC. Shane Jones, who has been with Microsoft for six years, has written a letter to the Federal Trade Commission (FTC) stating that Microsoft has refused to take down the tool despite repeated warnings about its capacity to generate harmful images.

During his testing of Copilot Designer for safety issues, Jones discovered that the tool generated troubling scenes such as demons and monsters, along with content related to abortion rights, teenagers with assault rifles, sexualized images of women in violent situations, and depictions of underage drinking and drug use. The tool even produced images of Disney characters like Elsa from Frozen in front of wrecked buildings and “free Gaza” signs in the Gaza Strip. It also created images of Elsa wearing an Israel Defense Forces uniform while holding a shield with Israel’s flag.

Jones has been trying to alert Microsoft since December about his concerns regarding DALL-E 3, the model that powers Copilot Designer. He initially posted an open letter on LinkedIn, but Microsoft’s legal team contacted him and asked him to remove the post, which he did. In his letter to the FTC, Jones expressed his disappointment with Microsoft’s failure to address these issues and its continued marketing of the product without implementing the necessary safeguards.

Microsoft has not yet responded to requests for comment from The Verge. However, in January, Jones wrote to a group of US senators about his concerns after Copilot Designer generated explicit images of Taylor Swift that quickly spread across various platforms. Microsoft CEO Satya Nadella acknowledged the severity of the situation and pledged to add more safety measures.

This incident follows a similar one involving Google, which temporarily disabled its own AI image generator after users discovered that it created pictures of racially diverse Nazis and other historically inaccurate images.

The concerns raised by Jones highlight the potential dangers associated with AI image generators and the need for companies like Microsoft to prioritize safety and implement robust safeguards. As AI technology continues to advance, it is crucial for developers to be vigilant in addressing potential risks and ensuring that their tools do not generate harmful or inappropriate content.

video-container">

Leave a Comment

This site uses Akismet to reduce spam. Learn how your comment data is processed.