AI-powered coding is here and it is glorious
Tools like Copilot and CodeWhisperer are changing the game for how application code is developed.
While a lot of developers are worried about the impact of AI on their jobs, there is no stopping this train now.
According to a recent survey by GitHub:
“92% of U.S.-based developers are already using AI coding tools both in and outside of work”
While I don't believe the end of programming is near, I do believe it is going through a major change due to AI.
But what about the impact of this new revolution on Cybersecurity?
AI-powered coding is still a blind spot for most CISOs and their security teams.
Ask yourself these key questions:
Do you know whether your development team is using AI-powered coding tools?
How much of your code base is now AI-generated?
How do you train an AI to generate secure code?
How much access do these tools have within your environment?
Let us take a look!
What is AI-driven coding?
AI-driven coding can be thought of as coding assistants that integrate into the developer’s IDE and give them the following:
Automatic code generation based on developer prompts (a small sketch of this follows the list)
The ability to build entire applications from just a few requirements
Auto-correction that results in cleaner code
Easier onboarding of new languages and APIs, with a shorter learning curve
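To make that first point concrete, here is a minimal sketch of how prompt-driven generation typically works: the developer writes only a comment describing the goal, and the assistant proposes the implementation. The prompt, function name, and regex here are hypothetical illustrations, not output from any specific tool.

```python
# Developer types a prompt as a comment in the IDE:
# "Validate that a string looks like a well-formed email address"

import re

# The assistant then suggests a complete implementation such as this:
EMAIL_PATTERN = re.compile(r"^[\w.+-]+@[\w-]+\.[\w.-]+$")

def is_valid_email(address: str) -> bool:
    """Return True if the address looks like a well-formed email."""
    return bool(EMAIL_PATTERN.match(address))

print(is_valid_email("dev@example.com"))  # True
print(is_valid_email("not-an-email"))     # False
```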
The boost to productivity is massive as AI tools become more deeply integrated into the development pipeline.
These tools also make coding accessible to a broader audience, allowing beginners to generate working code and roles like product managers to get more hands-on.
So what is the problem?
The Cybersecurity question
As with all good things that bring about a change in technology, new risks are introduced.
I have written extensively on the new types of risks that AI applications introduce, and AI-powered development is no exception.
As AI coding gets adopted into standard development processes, here are a few of the new risks that can come with it:
AI coding tools can be vulnerable to poisoning attacks, in which an attacker corrupts the underlying model and tricks it into generating malicious or insecure code.
AI coding tools often require access to a company’s source code repositories, which makes them an attractive supply chain target. Attackers can exploit vulnerabilities in these tools to reach a company’s source code and stage further attacks.
Hallucinations are a common problem with AI models, and these tools can produce unreliable or incorrect code if their output is not checked properly. Blindly trusting AI output is a major issue that will only get worse (a sketch of one simple guardrail follows this list).
AI coding tools may well generate cleaner code than us humans, but they might not understand the intricacies of how business applications work. This can leave them blind to business logic flaws that attackers can exploit (illustrated in the second sketch below).
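On the hallucination point, one simple guardrail is to never trust AI-generated code until its dependencies are verified. Below is a minimal Python sketch (my own illustration, not a feature of any coding tool) that parses a snippet and flags top-level imports that do not resolve in the current environment; `secure_json_utils` is a made-up name standing in for a hallucinated package that an attacker could later register under that name.

```python
# Sketch of a guardrail: before running AI-generated code, verify that
# every top-level import actually resolves in your environment, so
# hallucinated (nonexistent) dependencies are caught early.
import ast
import importlib.util

def find_unresolvable_imports(source: str) -> list[str]:
    """Return imported top-level module names that cannot be resolved."""
    tree = ast.parse(source)
    modules: set[str] = set()
    for node in ast.walk(tree):
        if isinstance(node, ast.Import):
            modules.update(alias.name.split(".")[0] for alias in node.names)
        elif isinstance(node, ast.ImportFrom) and node.module and node.level == 0:
            modules.add(node.module.split(".")[0])
    return sorted(m for m in modules if importlib.util.find_spec(m) is None)

if __name__ == "__main__":
    # "secure_json_utils" is a hypothetical, hallucinated package name.
    snippet = "import os\nimport secure_json_utils\n"
    print(find_unresolvable_imports(snippet))  # ['secure_json_utils']
```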
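And to illustrate the business logic risk, consider this hypothetical AI-generated checkout function. It is clean, typed, and documented, and it would sail through most linters and scanners, yet nothing enforces the business rule that an order quantity must be positive, so an attacker who submits a negative quantity gets money credited instead of charged.

```python
# Hypothetical AI-generated checkout logic: syntactically clean,
# but missing the business rule that quantity must be positive.
def charge_for_order(balance: float, unit_price: float, quantity: int) -> float:
    """Deduct the order total from the account balance and return the new balance."""
    total = unit_price * quantity
    return balance - total

# Normal purchase: 2 items at $10 drop the balance from $100 to $80.
print(charge_for_order(100.0, 10.0, 2))    # 80.0

# Exploit: "buying" -10 items credits $100 to the attacker's account.
print(charge_for_order(100.0, 10.0, -10))  # 200.0
```

A scanner hunting for injection flaws or unsafe APIs finds nothing wrong here; only a reviewer who understands the business rules would flag it.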
AI-powered development is a game changer, but CTOs and CIOs must be aware of the risks involved before handing these tools the keys to the kingdom.
Use them as assistants, not replacements, for your development team, and you can get the full advantages of AI with minimal risk.