Executive Summary
Generative AI models can perform a wide range of tasks that traditionally require creativity and human understanding. During training they learn patterns from existing data and can subsequently generate new content, such as text, images, and music, that follows these patterns. Because of their versatility and generally high-quality results, they represent, on the one hand, an opportunity for digitalization. On the other hand, the use of generative AI models introduces novel IT security risks that must be considered in any comprehensive analysis of the IT security threat landscape.
In response to this risk potential, companies and authorities that use generative AI should conduct an individual risk analysis before integrating it into their workflows. The same applies to developers and operators, since many risks in the context of generative AI must be addressed at development time or can only be influenced by the operating company. Based on this analysis, existing security measures can be adjusted and additional measures taken.
Table of Contents
1 Introduction
1.1 Target Audience and Aim of this Document
1.2 Groups of Relevant Persons
1.3 Disclaimer
2 Large Language Models
2.1 What are Large Language Models?
2.2 Opportunities of LLMs
2.2.1 General Opportunities
2.2.2 Opportunities for IT Security
2.3 Risks of LLMs
2.3.1 Proper Use
2.3.2 Misuse
2.3.3 Attacks
2.4 Countermeasures
2.5 Classification and Reference of Risks and Countermeasures
3 Summary
Bibliography
Version Date Editor Description
1.0 15 May 2023 TK 24 First Release
1.1 4 April 2024 TK 24
• The document was restructured for clarity, better comprehensibility, and to facilitate the intended future expansion.
• The countermeasures addressing the risks in the context of LLMs were consolidated into a single chapter, since some countermeasures counteract several risks; this avoids multiple mentions. A cross-reference table shows which countermeasure counteracts which risk.
• The information on LLMs was extensively updated and supplemented on the basis of current publications.
• Graphics were inserted to associate risks and countermeasures with the points in time at which they can occur or must be taken.