
Microsoft Worker Raises Concerns Over AI Tool Creating ‘Sexually Objectified’ Images

A Microsoft software engineer has raised concerns about the company’s AI image generation tool, Copilot Designer, creating “sexually objectified” images. Shane Jones discovered a security vulnerability in OpenAI’s DALL-E image generator model, which is embedded in many of Microsoft’s AI tools, including Copilot Designer. Jones reported his findings to Microsoft and urged the company to remove Copilot Designer from public use until better safeguards could be implemented.

In a letter to the Federal Trade Commission (FTC), Jones stated that while Microsoft publicly markets Copilot Designer as a safe AI product for all users, including children, the company is aware of systemic issues where the tool generates harmful and offensive images. He emphasized that Copilot Designer lacks the necessary product warnings or disclosures to inform consumers about these risks.

Jones highlighted that Copilot Designer not only generates sexually objectified images of women but also creates harmful content in various other categories, such as political bias, underage drinking and drug use, misuse of corporate trademarks and copyrights, conspiracy theories, and religion.

The FTC confirmed receiving Jones’ letter but declined to comment further. This incident adds to the growing concerns surrounding AI tools generating harmful content. Just last week, Microsoft announced an investigation into reports that its Copilot chatbot was producing disturbing responses related to suicide. Similarly, Alphabet Inc.’s flagship AI product, Gemini, faced criticism for generating historically inaccurate scenes when prompted to create images of people.

Jones also wrote to the Environmental, Social and Public Policy Committee of Microsoft’s board, urging the company to voluntarily and transparently disclose known AI risks, especially when marketing products to children. He emphasized the need for transparency rather than waiting for government regulation.

Microsoft responded by stating its commitment to addressing employee concerns and appreciating their efforts in studying and testing technology to enhance safety. OpenAI did not provide a comment on the matter.

Jones has been raising his concerns with Microsoft for the past three months. In January, he wrote to Democratic Senators Patty Murray and Maria Cantwell, as well as House Representative Adam Smith, requesting an investigation into the risks associated with AI image generation technologies and the responsible practices of the companies building and marketing these products. The lawmakers did not immediately respond to requests for comment.

The incident raises important questions about the ethical use of AI tools and the responsibility of companies to ensure the safety and appropriateness of their products. As AI technology continues to advance, it becomes crucial for companies to implement robust safeguards and transparently disclose any known risks to protect consumers, especially when marketing these products to children.

