
Code generation tools can hinder security more than they help
New research from a group of Stanford-affiliated researchers has revealed that AI code generation tools like GitHub Copilot may pose a greater security risk than many users realize.
The study focused on Codex, a product of OpenAI, the research lab that Elon Musk co-founded.
Codex powers Microsoft’s GitHub Copilot, which is designed to make coding easier and more accessible by translating natural language into code and offering suggestions based on the surrounding context.
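As a rough illustration of that workflow (a hypothetical sketch, not captured Copilot output; the prompt and function name are invented for this example), a developer might write a plain-language comment and accept a suggested implementation:

```python
# Hypothetical example of the prompt-and-completion workflow; this is not
# real Copilot output, only the general shape of the interaction.

# Developer writes a natural-language comment describing the goal:
# "return the n largest values from a list of numbers"

# The assistant then proposes an implementation such as:
import heapq

def n_largest(values: list[float], n: int) -> list[float]:
    # heapq.nlargest handles n greater than len(values) gracefully
    return heapq.nlargest(n, values)

print(n_largest([3.5, 1.0, 9.2, 4.4], 2))  # [9.2, 4.4]
```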
AI coding issues
The study’s lead co-author, Neil Perry, explains that “code generation systems do not currently replace human programmers.”
The study asked 47 programmers of varying skill levels to use Codex to solve security-related problems in the Python, JavaScript, and C programming languages. It found that participants who relied on Codex were more likely to write insecure code than a control group working without it.
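The article does not reproduce code from the paper, but as a hypothetical illustration of the class of mistake such security reviews tend to flag (the function names, schema, and payload below are invented for the example), compare an injection-prone SQL query built from user input with a parameterized one:

```python
import sqlite3

# Hypothetical illustration only -- not code from the Stanford study.
# It shows a common insecure pattern (SQL injection via string
# formatting) alongside the safer parameterized alternative.

def find_user_insecure(conn: sqlite3.Connection, username: str):
    # Building the query with string interpolation lets a malicious
    # username (e.g. "x' OR '1'='1") change the query's meaning.
    query = f"SELECT id, username FROM users WHERE username = '{username}'"
    return conn.execute(query).fetchall()

def find_user_secure(conn: sqlite3.Connection, username: str):
    # A parameterized query keeps user input as data, not SQL.
    query = "SELECT id, username FROM users WHERE username = ?"
    return conn.execute(query, (username,)).fetchall()

if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, username TEXT)")
    conn.executemany("INSERT INTO users (username) VALUES (?)",
                     [("alice",), ("bob",)])
    payload = "x' OR '1'='1"
    print(find_user_insecure(conn, payload))  # returns every row
    print(find_user_secure(conn, payload))    # returns nothing
```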
Perry explained, “Programmers using [coding tools] to perform tasks outside their areas of expertise, and those who use them to speed up tasks they are already trained in, should carefully review the results and the context in which they are used throughout the project.”
This isn’t the first time AI-powered coding tools have come under scrutiny. In fact, GitHub Copilot’s tendency to reproduce other people’s code without attributing it to its original developers has already resulted in the Microsoft-owned company being sued, with a class-action lawsuit seeking roughly $9 billion in damages for 3.6 million alleged individual violations of Section 1202 of the DMCA.
For now, AI-powered code generation tools are best thought of as a helping hand that can speed up development rather than a complete replacement for human programmers. However, if the pace of development over the past few years is anything to go by, they could soon replace much of traditional coding.
Via TechCrunch