2.5 Million Users Quit ChatGPT Amid Pentagon Deal Controversy


OpenAI Faces Backlash Over Pentagon Collaboration


Artificial intelligence company OpenAI is facing a major public backlash following its deal with the US Department of Defense.


Millions of users are abandoning ChatGPT as part of the QuitGPT movement, citing concerns over privacy and military use.


Critics claim the deal could allow the US government access to citizens’ data for defense and security purposes.


The controversy has sparked global discussions about the ethics of AI use in military operations.


QuitGPT Movement Gains Momentum


Since the announcement of the Pentagon deal, a movement called QuitGPT has emerged in the US and other countries.


A dedicated website for the movement has been created to connect concerned users worldwide.


According to reports, over 2.5 million users have quit ChatGPT so far.


The movement reflects growing public unease with AI technology being leveraged for military and surveillance purposes.


ChatGPT App Uninstalls Surge by Nearly 300%


Last week saw a massive spike in app deletions in the US.


Data from market intelligence firm Sensor Tower revealed that on February 28, the number of ChatGPT uninstalls surged by 295%, compared with a typical rate of around 9%.


This surge highlights widespread fear and anger over AI’s military applications.


Many users are switching to alternatives perceived as more ethical or less militarized.


Rival Claude AI Becomes Popular


The US Department of Defense originally sought unrestricted access to Anthropic’s Claude AI for use in military intelligence and covert operations.


Anthropic, however, imposed strict limits, refusing to allow its AI to be used for mass surveillance or lethal weapons.


Following this refusal, the Pentagon canceled its deal with Anthropic and blacklisted the company, shifting its interest toward AI platforms willing to align with government requirements.


Users frustrated with OpenAI’s Pentagon involvement are considering Claude as a safer alternative.


Ethical Concerns Driving Opposition


OpenAI faces criticism because its technology could potentially be used for military surveillance or autonomous weapons, raising serious ethical questions.


Anthropic CEO Dario Amodei emphasized that his company will not allow AI to be used for mass surveillance or lethal purposes, contrasting with OpenAI’s willingness to partner with the Pentagon.


The controversy underscores the growing tension between AI innovation, ethical responsibility, and government demands, shaping the future of AI adoption and public trust globally.


Disclaimer:


The information contained in this article is for general informational purposes only. While we strive to ensure accuracy, we make no warranties or representations of any kind, express or implied, about the completeness, accuracy, reliability, suitability, or availability of the content. Any reliance you place on the information is strictly at your own risk. The views, opinions, or claims expressed in this article are those of the author and do not necessarily reflect the official policy or position of any organization mentioned. We disclaim any liability for any loss or damage arising directly or indirectly from the use of this article.
