OpenAI says it has finally taken the lead in the competitive field of AI-powered coding. Its latest model, GPT-5.3 Codex, outperforms rival systems on coding benchmarks and has reported results that beat earlier versions from both OpenAI and Anthropic. This could give OpenAI the advantage it has been seeking in a field that may change how software is developed.  

However, OpenAI is launching the model with strict controls and is delaying full developer access. The reason is that the features that make GPT-5.3 Codex so good at coding also bring serious cybersecurity risks. As OpenAI pushes to build the best coding model, it now faces the challenges that come with releasing such powerful technology.  

Paid ChatGPT users can now access GPT-5.3 Codex for everyday software development tasks, such as writing, debugging, and testing code, via OpenAI’s Codex tools and the ChatGPT interface.  

For now, OpenAI is not granting unrestricted access to high-risk cybersecurity users and is holding back on full API access that would enable the model to be widely automated. Additional safeguards, including a new Trusted Access Program for approved security professionals, govern these sensitive use cases. This restraint signals that OpenAI believes the model poses greater cybersecurity risks than its predecessors.

OpenAI is also offering $10 million in API credits to developers who want to use its models to build tools that strengthen cyber defenses.  

In a blog post about the model’s release, OpenAI said it does not have conclusive evidence that the new model can fully automate cyber attacks. Still, the company is being cautious and using its most thorough cybersecurity measures to date. These include safety training, automated monitoring, trusted access for advanced features, and enforcement pipelines with threat intelligence.  

OpenAI CEO Sam Altman addressed these concerns on X, saying GPT-5.3 Codex is "our first model to score high on cybersecurity in our preparedness framework," referring to the company's internal risk-rating system for new models. In other words, OpenAI believes this is the first model that could realistically cause cyber-harm, especially if automated or used widely.

According to OpenAI’s preparedness framework, the company will not release any model rated as high-risk in areas such as cybersecurity unless it first puts safeguards in place. The framework lists a Trusted Access Program as one possible safeguard.  

Codex Spark is the initial step toward a Codex that offers two main modes:  

  1. One for longer-term reasoning and execution  
  2. Another for collaboration and quick iteration  

As the product develops, these modes will merge: Codex will let you stay closely involved in an interactive loop while it handles longer tasks in the background or spreads them across multiple models for greater speed and coverage, so you won't have to pick just one mode from the start.  

As models get better, the speed of interaction becomes the bottleneck. Faster inference helps close that gap, making Codex easier to use and opening up more possibilities for anyone who wants to turn an idea into working software.

Source: OpenAI’s new model leaps ahead in coding capabilities—but raises unprecedented cybersecurity risks 
