OpenAI unveils Media Manager, lets creators control how their work is used to train AI models


Key notes

  • OpenAI prioritizes responsible AI development with benefits for creators, users and publishers.
  • Their “Media Manager” tool (launching in 2025) lets creators control how their work is used in AI training.
  • Partnerships with content creators (improved source linking in ChatGPT) and diverse datasets aim for a well-rounded AI.

OpenAI has shared details about its approach to responsible AI development. Its focus centers on ensuring the benefits reach various stakeholders, including users, creators, and publishers. This comes after OpenAI announced that it is joining the Steering Committee of C2PA.

A central aspect of this approach is respecting the rights of content creators. OpenAI says it recognizes the potential for misuse of AI and is developing a tool named “Media Manager” (targeted for launch in 2025) that will let creators control how their work is used to train AI models. This tool could set a new standard within the AI industry.

Media Manager:

Purpose: A tool designed to give creators and content owners control over how their works are used in training AI systems.

Functionality: Creators can likely specify how they want their work included (or excluded) from datasets used to train AI models. This could involve opting out entirely, allowing use with attribution, or setting other parameters.
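OpenAI has not published an API for Media Manager, so the following is purely a hypothetical sketch of how the preferences described above (opt out entirely, allow with attribution, or other parameters) might be represented and applied when assembling a training dataset. All names here are assumptions for illustration, not part of any real OpenAI interface.

```python
from dataclasses import dataclass
from enum import Enum


class UsagePolicy(Enum):
    """Hypothetical usage choices a creator might set for a work."""
    OPT_OUT = "opt_out"                          # exclude from training entirely
    ALLOW_WITH_ATTRIBUTION = "allow_attribution"  # include, but credit the source
    ALLOW = "allow"                               # include without conditions


@dataclass
class MediaPreference:
    """A creator's declared preference for one work (illustrative only)."""
    work_id: str
    owner: str
    policy: UsagePolicy


def filter_training_set(work_ids: list[str],
                        prefs: list[MediaPreference]) -> list[str]:
    """Drop any work whose owner opted out; keep everything else."""
    opted_out = {p.work_id for p in prefs if p.policy is UsagePolicy.OPT_OUT}
    return [w for w in work_ids if w not in opted_out]
```

For example, if the owner of work `"a"` opts out, `filter_training_set(["a", "b"], prefs)` would return only `["b"]` under this sketch.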

OpenAI is also entering partnerships with content creators. Recent improvements in ChatGPT include enhanced source linking, which gives users better context and facilitates direct connections with publishers. Collaborations with prominent news organizations like the Financial Times aim to enrich the user experience on news-related topics within ChatGPT.

In terms of model development, OpenAI says it is aiming for inclusivity. To achieve this, it trains its models on diverse datasets spanning various languages, cultures, and subject areas. This helps mitigate cultural bias and ensures the AI can serve a broader global audience.

Transparency is another key component of OpenAI’s approach. The company says it openly communicates its data sources and refrains from using personal data during training. Users also have the option to control whether their data contributes to the development of future AI models.

Overall, OpenAI’s vision emphasizes responsible AI development. Their initiatives empower creators, prioritize user privacy, and promote inclusivity. Through collaboration and transparency, OpenAI strives to make a positive impact on all participants within the ever-evolving field of AI.

More here.
