OpenAI — the artificial intelligence company behind ChatGPT and other popular AI tools — is reportedly working on a new social network designed to be used only by verified human users, in an effort to tackle widespread bot and spam problems that plague existing platforms.

According to multiple reports, including from Forbes and other tech news outlets, the project is still in very early development with a small team working on it, and no official launch date has been announced.

Fighting Bots With Identity Verification

The main idea behind this possible new platform is to ensure that every account belongs to a real person — eliminating fake, automated accounts that generate spam, misinformation, or manipulative content.

To achieve this, OpenAI may require strict identity verification using biometric methods. This could include:

  • Iris scanning, similar to the Orb device developed by Tools for Humanity (the company behind the World project, chaired by OpenAI CEO Sam Altman).
  • Apple’s Face ID or other facial recognition systems as part of a “proof of personhood” check.

If implemented, this kind of biometric verification would go beyond traditional measures — like email or phone checks — by attempting to confirm users are human at a fundamental physical level.

Why This Matters

Social media platforms like X (formerly Twitter), Instagram, and TikTok have all struggled with AI‑generated bots, fake accounts, and automated activity that distorts engagement and spreads harmful content. OpenAI’s proposed approach aims to create a trusted, bot‑free space where interactions are genuinely human.

Backers say this could improve user experience and authenticity compared with current platforms, where algorithmic signals and phone‑based verification often fail to stop sophisticated automated accounts.

Privacy and Security Concerns

However, the use of biometric data — especially things like iris scans — raises major privacy and security questions.

  • Biometric identifiers like iris or face scans are permanent and cannot be changed like a password or email.
  • Critics warn that centralized storage of such sensitive data by a private company could pose serious risks if it were ever breached or misused.

Analysts also note that requiring biometric verification could limit adoption, as many users may resist giving up personal data to a social platform.

Current Status and Outlook

As of now, OpenAI has not officially confirmed the project or provided details on a launch timeline. Reports indicate that the idea is still being tested internally, and its scope could change significantly before anything becomes public.

Nevertheless, the concept reflects a broader push by tech companies to explore new ways to prove humanity online as AI‑generated content becomes more sophisticated and harder to distinguish from real human activity.


Disclaimer:

The views and opinions expressed in this article are those of the author and do not necessarily reflect the official policy or position of any agency, organization, employer, or company. All information provided is for general informational purposes only. While every effort has been made to ensure accuracy, we make no representations or warranties of any kind, express or implied, about the completeness, reliability, or suitability of the information contained herein. Readers are advised to verify facts and seek professional advice where necessary. Any reliance placed on such information is strictly at the reader’s own risk.
