Whistleblower Alerts US Officials and Microsoft Board to Risks of AI-Generated Imagery – Sky Bulletin

Amid increasing reliance on artificial intelligence for creative tasks, a Microsoft engineer has raised significant concerns about the potential misuse of Microsoft’s AI image-generation tool. The engineer has taken the bold step of contacting both U.S. legislators and the Microsoft board of directors to urge immediate action on the matter.

Shane Jones, a principal software engineering lead at Microsoft, spoke to The Associated Press and identified himself as a whistleblower. He also met with staff from a U.S. Senate committee last month to present his concerns.

The Federal Trade Commission has acknowledged receipt of his communication but chose not to comment further. Conversely, Microsoft has expressed its support for employees raising such issues, encouraging Jones to use the company’s extensive internal systems for investigation and resolution.

Jones has spent three months investigating the safety implications of Microsoft’s Copilot Designer. The tool, built on OpenAI’s DALL-E 3, generates new images from textual descriptions. But Jones highlights a distinct risk: the software sometimes produces disturbing or inappropriate content even from seemingly innocuous prompts.

In a letter to FTC Chair Lina Khan, he cited an example in which the prompt “car accident” produced some images featuring sexually objectified depictions of women. He also identified other problems, including violence, bias, illegal activities, and controversial content that could inadvertently surface in generated images.

At Microsoft’s direction, Jones also raised his concerns with OpenAI. He later voiced them publicly on LinkedIn, which led to a confrontation with Microsoft’s legal team, and he has contacted the attorney general in Microsoft’s home state of Washington as well.

Though OpenAI’s DALL-E model is at the core of his concerns, Jones notes that generating images through OpenAI’s ChatGPT may not present the same issues: safeguards integrated into ChatGPT appear to have already addressed many of the problems still present in Copilot Designer.

The rise of AI image-generators began in earnest in 2022, captivating the public and pressuring major tech companies to launch comparable services. Nonetheless, these groundbreaking technologies need robust safeguards to mitigate the risk of generating and spreading harmful “deepfake” content.

Google, for example, has suspended some image-generation features of its Gemini chatbot after problematic outputs relating to depictions of race and ethnicity. Without proper controls, the door to falsely creating images of recognizable individuals remains worryingly ajar.

Shane Jones’s actions underscore the pressing need for ethical and secure deployment of AI technologies, a challenge that grows more urgent as AI applications increase in sophistication and reach. As AI image-generators capture the public imagination, ensuring they do not become tools for misuse remains a critical priority for technology companies and regulators alike. Vigilance and proactive measures are needed to keep AI from amplifying harmful content, and this episode may prompt other companies to evaluate and reinforce their own safeguards.
