Microsoft Engineer Says Copilot Needs Better Safeguards Against Violent Content and Copyright Violations


After Microsoft recently had to implement new safeguards in its Designer AI tool to block the creation of AI-generated porn deepfakes, a Microsoft engineer is sounding the alarm about various unsafe aspects of the company’s AI technology. CNBC is reporting today that Shane Jones, an AI engineer at Microsoft, has sent letters to the FTC and the Microsoft board to point out how Microsoft’s AI tools can be used to create violent images and violate copyright laws.

Jones, who has been testing the limits of Microsoft’s Copilot Designer tool in his free time, discovered that it was very easy to create sexualized images of women, pictures of teenagers with guns, and other sexual and violent content Microsoft probably doesn’t want its AI technology to be associated with. And even though Microsoft claims that it has built guardrails into its AI products to prevent copyright infringement, Jones said that he also managed to generate controversial images featuring popular Disney characters such as Mickey Mouse and Snow White.


The Microsoft engineer told CNBC that he warned Microsoft about his findings in December, but the company refused to take down its Copilot Designer tool. In January, Jones shared his concerns about Microsoft’s AI in a post on LinkedIn, but Microsoft’s legal team asked him to take it down, and he obeyed. During the same month, Jones also sent a letter to US senators to warn them about his findings, and he followed up this week with letters to US FTC chair Lina Khan and Microsoft’s board of directors.

“Over the last three months, I have repeatedly urged Microsoft to remove Copilot Designer from public use until better safeguards could be put in place,” Jones wrote in his letter to Khan. “Again, they have failed to implement these changes and continue to market the product to ‘Anyone. Anywhere. Any Device,’” the Microsoft engineer added.

Speaking with CNBC, a Microsoft spokesperson said that the company is “committed to addressing any and all concerns employees have in accordance with our company policies, and appreciate employee efforts in studying and testing our latest technology to further enhance its safety.” The statement continues with Microsoft saying that it has “established robust internal reporting channels to properly investigate and remediate any issues, which we encourage employees to utilize so we can appropriately validate and test their concerns.”

Microsoft has made it clear since last year that it’s going all-in on AI, and the company’s drive to launch more “Copilot” products doesn’t seem to be slowing down. However, after Google recently paused the image generation capabilities of its Gemini AI to investigate why inaccurate or offensive images could be created, Microsoft would probably be wise to hit the pause button as well.


Thurrott © 2024 Thurrott LLC