OpenAI warns new models pose ‘high’ cybersecurity risk

Dec 10 (Reuters) – OpenAI on Wednesday warned that its upcoming artificial intelligence models could pose a “high” cybersecurity risk as their capabilities advance rapidly.

The AI models might either develop working zero-day remote exploits against well-defended systems or assist with complex enterprise or industrial intrusion operations aimed at real-world effects, the ChatGPT maker said in a blog post.

As capabilities advance, OpenAI said it is “investing in strengthening models for defensive cybersecurity tasks and creating tools that enable defenders to more easily perform workflows such as auditing code and patching vulnerabilities”.

To counter cybersecurity risks, OpenAI said it is relying on a mix of access controls, infrastructure hardening, egress controls and monitoring.

The Microsoft-backed company said it will soon introduce a program to explore providing qualifying users and customers working on cyberdefense with tiered access to enhanced capabilities.

OpenAI will also establish an advisory group, called the Frontier Risk Council, which will bring experienced cyber defenders and security practitioners into close collaboration with its teams.

The group will begin with a focus on cybersecurity, and expand into other frontier capability domains in the future.

(Reporting by Juby Babu in Mexico City; Editing by Krishna Chandra Eluri)
