What steps are OpenAI taking to limit the use of AI tools for political campaigns?



OpenAI is introducing tools to combat disinformation ahead of elections in various countries, in response to the potential misuse of its technology, such as ChatGPT and DALL-E 3, in political campaigns. The company is concerned that AI-driven disinformation and misinformation could undermine the democratic process, especially during election periods.

One key aspect of OpenAI’s strategy is to restrict the use of its technology for political campaigning and lobbying until more is known about the potential impact of personalized persuasion.

The company acknowledges the need to understand the effectiveness of its tools and is actively working on solutions to ensure responsible use. This includes developing tools to provide reliable attribution for text generated by ChatGPT and to let users detect whether an image was created with DALL-E 3.

OpenAI also plans to implement the Coalition for Content Provenance and Authenticity’s digital credentials, improving methods for identifying and tracing digital content and addressing concerns about the authenticity of AI-generated material.

The company is also taking steps to ensure that its models, like DALL-E 3, have “guardrails” to prevent the generation of images of real people, including political candidates.

The development and deployment of these anti-disinformation tools are part of an ongoing effort to mitigate the negative impact of advanced AI technologies on society and democratic processes.

More here.
