Artificial intelligence (AI) is currently considered a key technology whose effective and efficient use significantly determines the innovation and competitiveness of businesses. New AI applications are constantly making their way into everyday business operations. It’s time to inform our clients about the current state of the data protection debate.
AI applications: Curse or blessing?
One example of a currently highly debated AI application is “ChatGPT”. Since its official launch in late 2022, the service reached one million users within its first five days. The stated goal of its provider, OpenAI, is to explore artificial intelligence and develop it further for the benefit of humanity.
ChatGPT, like its competitors “OPT” from Meta or “Bard” from Google, enables the simplified creation, editing and translation of texts and graphics of various types. Users can engage in a dialogue with the language- and text-based chatbot on almost any imaginable topic, and results are available within seconds. So far, so good…
However, every interaction with the chatbot also means that every conversation and every user input becomes part of the provider’s training data and can be stored and further processed accordingly. Considering the potential for creating fake news, phishing emails or malware, AI applications do pose certain risks.
Development in this field is rapid. Phishing attacks are becoming increasingly personalised and precise. Especially in business use cases, companies should therefore keep an eye on issues such as identity theft and fraud and stay up to date on secure and data-protection-compliant use.
In the following, we will shed light on some questions that are currently of particular importance in terms of data protection law:
How does European data protection law, for example, address the fact that the creation of AI models typically involves using third-party texts and images, which are personal data, without the prior consent of the individuals concerned? Are users of AI services aware that their input commands (“prompts”) are further used by the providers for the development of their software, as often stated in the terms of service? How can AI services be used in a data-protection-compliant way? What is the status of the injunctions of the Italian and German data protection authorities against ChatGPT?
In general, the processing of personal data is subject to the following principles:
Providers of applications that process personal data are obliged under the GDPR to comply with the applicable requirements and to inform data subjects about the purpose of the data processing. Furthermore, sensitive data such as health data, information about sexual identity, and ideology are in principle excluded from data processing.
Data plays a crucial role in AI applications, because sustainable learning can only occur from large amounts of provided data sets. Thus, all data – including personal data – disclosed to the chatbot becomes part of the text corpus used for training the system. In the case of ChatGPT and other services, explicit consent to the processing of data as required under the GDPR has not been obtained, and users are not adequately informed about it.
How responsible EU data protection authorities are responding:
As a US company, OpenAI does not have a subsidiary in the EU. Nevertheless, because it processes the personal data of European users, the European data protection supervisory authorities remain competent and have already initiated proceedings against OpenAI in some cases.
For example, the Italian data protection supervisory authority GPDP has already issued a prohibition order against OpenAI regarding ChatGPT. In its view, no legal basis for the processing of personal data by OpenAI is recognisable. In particular, consent could be considered, but OpenAI has neither obtained such consent nor informed users or non-users that their data is being used to train the algorithm without their knowledge.
German authorities share a similar view:
Both the “Hessische Beauftragte für Datenschutz und Informationsfreiheit” and the “Landesbeauftragte für Datenschutz und Informationsfreiheit Baden-Württemberg” requested a statement from OpenAI back in April on how ChatGPT is organised in terms of data protection law. A response to the comprehensive questions is expected in mid-June, and an evaluation will follow once appropriate feedback has been received. The goal is a unified approach at the European level. The authorities expect US AI providers to guarantee the same level of data protection as European businesses.
There are currently no plans to ban ChatGPT, but OpenAI’s response is eagerly awaited. In parallel, the EU is already developing a framework for the use of AI applications in Europe that will apply to all member states.
To discuss the legal limitations for Artificial Intelligence services in the EU with you, ePrivacy is offering an English-language webinar on Thursday, 13 July 2023 at 2.00 pm (CEST) in cooperation with our partner law firm UNVERZAGT Rechtsanwälte:
“Legal limitations for Artificial Intelligence services in the EU – the current and future regulatory framework”
Speaker: Dr Lukas Mezger, UNVERZAGT Rechtsanwälte
We will provide more information about this event shortly.