
Developers' guide

Optimising Security in Generative AI

In this guide, we discuss security issues related to generative AI (GenAI), in particular tools based on Large Language Models (LLMs). We present the main vulnerabilities to consider when developing AI-based applications and identify corresponding mitigations. Our discussion is based on the OWASP Top 10 for LLM Applications.

The guide focuses on the following security topics:

  • Prompt injection
  • Training data poisoning
  • Insecure output handling
  • Insecure plugin design
  • Excessive agency
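To make the first topic concrete, the sketch below shows one naive mitigation for prompt injection: screening user input for common override phrases and keeping user text clearly delimited from the system prompt. The pattern list and function names are illustrative assumptions, not taken from the guide or from OWASP; real defenses need far more than keyword matching.

```python
import re

# Illustrative patterns only; real injection attempts are far more varied.
INJECTION_PATTERNS = [
    r"ignore (all )?(previous|prior) instructions",
    r"disregard .* system prompt",
]

def looks_like_injection(user_input: str) -> bool:
    """Return True if the input matches a known injection phrase."""
    lowered = user_input.lower()
    return any(re.search(p, lowered) for p in INJECTION_PATTERNS)

def build_prompt(system: str, user_input: str) -> str:
    """Refuse suspicious input; otherwise keep user text clearly delimited."""
    if looks_like_injection(user_input):
        raise ValueError("possible prompt injection detected")
    # Delimiters make it harder for user text to masquerade as instructions.
    return f"{system}\n\n<user_input>\n{user_input}\n</user_input>"
```

A filter like this is best treated as one layer among several: OWASP's guidance also emphasises privilege separation and human approval for sensitive actions.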

The guide has been produced as part of the Sb3D project funded by the Danish Industry Foundation (Industriens Fond).

  • Published 2024
