A paper by researchers at Stanford University has found that coders who employed AI assistants such as GitHub Copilot and Facebook InCoder actually ended up writing less secure code.
What’s more, such tools also lull developers into a false sense of security, with many believing that the code they produce with the assistant’s help is more secure than it actually is.
Nearly 50 subjects with varying levels of expertise were given five coding tasks across several languages; some were aided by an AI tool, while others worked without any help at all.
The authors of the paper – Neil Perry, Megha Srivastava, Deepak Kumar, and Dan Boneh – stated that there were “particularly significant results for string encryption and SQL injection”.
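The paper doesn’t reproduce the participants’ code, but the SQL injection class of mistake is easy to illustrate. Below is a minimal Python sketch – using the standard sqlite3 module and a hypothetical users table, not code from the study – contrasting a query built by string interpolation, which is injectable, with a parameterized query:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, is_admin INTEGER)")
conn.execute("INSERT INTO users VALUES ('alice', 0)")

def find_user_unsafe(name: str):
    # Vulnerable: the input is spliced into the SQL text, so a value
    # like "' OR '1'='1" changes the meaning of the query.
    query = f"SELECT * FROM users WHERE name = '{name}'"
    return conn.execute(query).fetchall()

def find_user_safe(name: str):
    # Parameterized: the driver receives the value separately from the
    # SQL text, so it is never interpreted as SQL.
    return conn.execute("SELECT * FROM users WHERE name = ?", (name,)).fetchall()

print(find_user_unsafe("' OR '1'='1"))  # returns every row
print(find_user_safe("' OR '1'='1"))    # returns []
```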
They also referenced previous research which found that around 40% of programs created with help from GitHub Copilot contained vulnerable code, although a follow-up study found that coders using Large Language Models (LLMs) such as OpenAI’s code-cushman-001 Codex model – on which GitHub Copilot is based – produced only 10% more critical security bugs.
However, the Stanford researchers explained that their own study looked at OpenAI’s codex-davinci-002, a more recent model than code-cushman-001 that is also used by GitHub Copilot.
One of the five tasks involved writing code in Python, and here the code was more likely to be erroneous and insecure when an AI helper was used. What’s more, participants with access to the assistant were also “significantly more likely to use trivial ciphers, such as substitution ciphers (p < 0.01), and not conduct an authenticity check on the final returned value”.
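To make the “trivial cipher” finding concrete, here is an illustrative Python sketch – the ROT13 substitution cipher and the use of the third-party cryptography package are assumptions for the example, not code from the study – contrasting a substitution cipher, which is trivially reversible and has no integrity check, with authenticated encryption via Fernet:

```python
import codecs
from cryptography.fernet import Fernet, InvalidToken

def substitution_encrypt(plaintext: str) -> str:
    # A trivial substitution cipher (ROT13): anyone can reverse it,
    # and nothing detects tampering with the result.
    return codecs.encode(plaintext, "rot13")

def authenticated_encrypt(plaintext: bytes, key: bytes) -> bytes:
    # Fernet combines symmetric encryption with an HMAC, so forged or
    # modified ciphertexts are rejected at decryption time.
    return Fernet(key).encrypt(plaintext)

print(substitution_encrypt("secret message"))     # 'frperg zrffntr'

key = Fernet.generate_key()
token = authenticated_encrypt(b"secret message", key)
print(Fernet(key).decrypt(token))                 # b'secret message'

try:
    Fernet(Fernet.generate_key()).decrypt(token)  # wrong key
except InvalidToken:
    print("authenticity check failed, token rejected")
```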
The authors hope that their study leads to further improvements in AI rather than a dismissal of the technology altogether, given the potential productivity gains such tools can offer. They simply maintain that the assistants should be used cautiously, since they can mislead programmers into thinking their output is infallible.
They also think AI assistants can encourage more people to get involved with coding regardless of their experience, including those who may be put off by the air of gatekeeping around the discipline.
Via The Register