Militaries can now use ChatGPT after quiet changes to OpenAI's usage policy
OpenAI recently revised its usage policy, amending the language concerning military applications. The previous policy explicitly prohibited using its technology for “military and warfare” purposes. The updated version removes this specific language, replacing it with broader restrictions against activities that could cause harm.
Changes to the Usage Policy:
- The earlier policy prohibited “activity that has high risk of physical harm, including […] weapons development and military and warfare.”
- The new policy retains a general ban on “using our service to harm yourself or others” and cites “develop or use weapons” as an example of prohibited activity.
The company explains the change as part of an effort to make the policy “clearer” and “universally applicable,” especially given its increasingly diverse user base, which now includes individuals building their own GPTs.
Militaries worldwide are actively exploring the potential of AI technology, including large language models like ChatGPT. The Pentagon, for instance, is evaluating how such tools could be applied across its operations.
The full extent of OpenAI’s interactions with the military, and its plans for any collaborations, remains unclear. The company hasn’t explicitly said whether its “harm” prohibition encompasses all military applications.