"So we've been talking about AI..." - the series, chapter 4 - Infosec and Risk Management
Remember the zeroth law of robotics...
This is one of the areas that I think may be more negatively impacted initially. Automated attacks, extended information gathering capabilities implemented by bad actors, data leaks due to lack of maturity (think RAGs and incidents by open data storages around the internet).
Attackers
If anything, AI tools gave scale, parallelism and speed to test and iterate. The quality of bug reports generated by AI is making vendors have a hard time threading between real reports and opportunistic reports. Agents and browser automation, which we could roughly classify as “bots” have been consistently being on top of access logs of most of ecommerces.
Incorporation of merchants into copilots and models as OpenAI and Shopify present another challenge as hallucination has been used to manipulate users into clicking into unsafe content.
The same effect happens on coding assistants. Research shows that bad actors are learning the most common hallucinations on suggested package names and creating them on javascript and other language repositories to introduce malware into the software supply chain. Code and text generation are based on training, most common patterns at scale will be adapted and replicated.
Prompt injection
There are honeypots being deployed that when attacked try to inject a prompt and reverse the data flow, in some ways making the attacker vulnerable to new instructions. Interactions with products based directly or indirectly on LLMs are subject to that.
A simple operation as to fetch a page, run a query or API integration, parse an email or image has the capability to do the same to your LLM or prompt based AI interface. That's a new instance of SQL injections and of the age old “eval” risk that some languages present.
Compliance management
The AI net positives for compliance, sales and customer support teams in my opinion are related to RFP and questionnaire automation.
In a world where vendor assessment is fluid, security teams covers a broad spectrum from protecting the company to supporting sales and customers. Each new sale require a new questionnaire, a new RFP (Requests For Proposal) to be answered, along with a lot of evidences.
Platforms that create a score based on security heuristics such as SecurityScoreCard aggregate information and evidences to help manage questionnaires and are a huge time saver. Their output is structured information that along with Generative AI can reduce risk and waiting time.
Compliance management as Vanta helps teams that most of the time are busy putting in place and managing information security management systems as ISO27001, SOC2 and so on.
They are in a critical path to revenue and probably one of the most disputed subjects on weekly meetings between sales and product teams.
Near future
Scaling fraud Management beyond analytics and query combination by detecting behaviors and automating finance workflows with proper observability.
The next wave of biometric protection as AI models are getting more realistic and it is easy to simulate real people for scamming and phishing purposes.
I have released a new book: “Go for Gophers”. It is a progressive, hands on book for teams and engineers that are adopting Go and look for an idiomatic path forward. Quick, practical and loaded with useful examples. Check it out.
If you are a CTO, Tech Lead or Product Leader check The CTO Field Guide. It is a book to help product engineering leaders of all levels. If you need personalized help, check the mentoring program based on the book at https://ctofieldguide.com/mentoring.html.
