OpenAI president Greg Brockman said AI coding tools are taking on a larger share of software development work, according to Business Insider, which reported his remarks from a Sequoia Capital talk.
During the talk, Brockman said agentic coding tools had changed quickly over a short period. He said the tools went from writing about 20% of code to about 80% over the course of December.
That level of use, Brockman said, changes AI's role in software development: rather than serving as a side tool, AI-generated code has become a central part of the work developers are doing.
Codex moves beyond engineers
OpenAI describes Codex as a cloud-based software engineering agent that can write features and fix bugs. It can also answer questions about a codebase and propose pull requests for review. Its documentation says Codex can read, edit, and run code in a cloud environment. The tool can also work on tasks in parallel.
Brockman encouraged founders to adopt AI tools, citing the pace of recent improvements. He pointed to Codex as one example of how OpenAI’s tools have expanded in scope.
Codex was initially aimed mainly at software engineers. Brockman said it has since moved toward supporting a wider set of users who work with computers, not only those writing software professionally.
Human review remains in place
Brockman said OpenAI still requires human accountability for code that is merged. He said the company does not support blindly using AI-generated code, nor does it reject the tools outright.
Code review usually checks whether code meets project requirements and passes tests. It also covers architecture, security, and maintenance risks before code is merged.
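The review process described above can be sketched as a simple pre-merge gate. This is a minimal, hypothetical illustration (the function name, commands, and approval flag are assumptions, not any company's actual pipeline): tests must pass and a named human reviewer must be recorded before a merge is allowed.

```python
import subprocess

def premerge_checks(test_cmd, approved_by):
    """Hypothetical pre-merge gate: block unless tests pass
    and a human reviewer has signed off."""
    # Human accountability check: someone must be on record.
    if not approved_by:
        return "blocked: no human reviewer recorded"
    # Run the project's test command; a non-zero exit code blocks the merge.
    result = subprocess.run(test_cmd, capture_output=True)
    if result.returncode != 0:
        return "blocked: tests failed"
    return f"merge allowed, approved by {approved_by}"

print(premerge_checks(["python", "-c", "assert 1 + 1 == 2"], approved_by="reviewer"))
```

Real review pipelines add much more than this sketch (architecture review, security scanning, maintainability checks), but the shape is the same: automated checks plus an accountable human approval.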
Google has reported a similar split between AI generation and human approval. In an April 2026 post, CEO Sundar Pichai said 75% of all new code at Google is now AI-generated and approved by engineers, up from 50% last fall.
That figure has risen from earlier levels. In October 2024, Pichai said more than a quarter of new code at Google was generated by AI, then reviewed and accepted by engineers.
Google has described AI-generated code as subject to engineer approval, rather than automatic deployment.
Big Tech cites higher AI code use
Google has moved further into agent-based development workflows. Pichai said Google engineers are using autonomous digital task forces. He also said one complex code migration was completed by agents and engineers working together. According to Pichai, the work was done six times faster than was possible a year earlier with engineers alone.
Meta has also set AI coding targets within parts of its engineering organisation. Business Insider reported in March that Meta expected 65% of engineers in its creation organisation to use AI for more than 75% of their committed code.
The creation organisation is responsible for building and maintaining core creative experiences at Meta. The reported target applied to one organisation, rather than Meta’s full engineering workforce.
Anthropic CEO Dario Amodei has also projected higher levels of AI-generated code. Speaking at a conference last year, he said AI could write 90% of code within three to six months, and possibly almost all code within 12 months.
Amodei also wrote in a blog post earlier this year that AI is already writing much of the code at Anthropic. He said this has accelerated the company’s work on building its next generation of AI systems.
Security checks stay part of the process
Security findings add another check on AI-generated software. Veracode’s 2025 GenAI Code Security Report analysed code generated by more than 100 large language models. The tests covered Java, JavaScript, Python, and C#. Veracode said 45% of code samples failed security tests and introduced OWASP Top 10 vulnerabilities.
Veracode’s findings do not assess OpenAI’s internal code review process. The report focuses on security risks in AI-generated code samples across tested programming languages.
Stack Overflow’s 2025 Developer Survey found that 84% of respondents were using or planning to use AI tools in their development process. A later Stack Overflow post said trust in AI tools fell to 29% in 2025.
OpenAI said a human remains responsible for merged code. Google said AI-generated code is approved by engineers, while Veracode’s report points to security issues that can appear in generated code.