Generative AI – opportunities and risks according to the BSI

The German Federal Office for Information Security (BSI) updated its publication on generative AI models at the beginning of the year. Such models learn patterns from existing data during training and can then generate new content such as text, images, audio, and video. These versatile models, which can potentially produce high-quality results, represent an opportunity for digitalization, but their use also entails new IT security risks.
 
The revised version of the BSI paper is aimed at companies and public authorities that want to integrate generative AI models, such as chatbots or image and video generators, into their work processes; it is intended in particular to support risk analysis. To this end, various risks and countermeasures are described and evaluated in detail. In the BSI's view, particular attention should be paid to raising user awareness, carefully selecting training data, conducting extensive testing, and handling sensitive information responsibly. Transparency, verification of inputs and outputs, and protection against input manipulation are further important aspects of using generative AI. To exploit the potential of this technology fully while minimizing risks, organizations should build up practical expertise and carry out realistic assessments using proofs of concept.
 
Because of the potential dangers associated with generative AI, companies should carry out an individual risk analysis before integrating it into their own processes. In the BSI's view, this applies not only to developers but also to those who operate generative AI, since some risks can only be addressed at that level. Based on the risk analysis, existing security measures should be adapted and, where necessary, supplemented. With a clear awareness of security and targeted countermeasures, companies can exploit the potential of generative AI safely and responsibly.
 
(Dr. Marian Klingebiel, UNVERZAGT Rechtsanwälte)