CAIML #33

CAIML #33 happened on November 19, 2024, at Oppenhoff.

Agenda

18h30 Open Doors

19h00 Welcome & Intro

19h15 Valentino Halim (Attorney at Oppenhoff) & Axel Grätz (Attorney at Oppenhoff): The AI Act from a developer’s perspective

The AI Act, the first comprehensive regulation of artificial intelligence, came into force on 1 August 2024. Not all systems and applications that computer scientists conventionally consider to be AI also fall under the scope of the regulation. Valentino Halim and Dr. Axel Grätz will explain and discuss the criteria and issues in distinguishing between AI and ordinary software. In addition, they will give an overview of your obligations when developing, deploying or providing AI under the AI Act.

Valentino Halim is an attorney at Oppenhoff specializing in digital and IT law, including data protection, AI regulation, IT contracts and cyber security. He advises companies in administrative and court proceedings, e.g. in the event of data protection violations. In addition, Valentino Halim has expertise in the field of digital business models, as well as in legal issues relating to emerging technologies. He is a member of several professional organizations relating to IT law and has gained international experience in Chicago (USA), among other places.

Axel Grätz is an attorney at Oppenhoff and advises national and international companies on all matters of IT and data protection law. He specializes in advising on the implementation and operation of AI systems. He studied law in Bonn and Cologne and completed his doctorate under the current Federal Data Protection Commissioner on the topic of ‘Artificial Intelligence in Copyright Law’. He was awarded the TELEKOM Prize for this work. Before joining Oppenhoff, he worked at several law firms in the field of IT and data protection law.

19h50 Gerhard Paass (Senior Researcher at Fraunhofer IAIS): State Space Models as an Alternative to the Transformer?

Thanks to the transformer, generative language models can for the first time generate plausible texts of high quality. However, the transformer's computational and memory requirements grow quadratically with the length of the input. State-space models offer a way to handle longer inputs in language models. The talk gives a short introduction to these models and discusses some benchmark results, showing that combinations of transformers and state-space models offer a number of advantages. Some applications of state-space models are also mentioned.
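To illustrate the scaling contrast the talk refers to, here is a minimal, hypothetical sketch (not taken from the talk itself): a diagonal linear state-space recurrence processes a length-L sequence in a single O(L) pass, while self-attention materializes an L×L score matrix, i.e. cost quadratic in L. All function names and parameters below are illustrative assumptions.

```python
import numpy as np

def ssm_scan(x, a, b, c):
    """Toy diagonal state-space model:
       h_t = a * h_{t-1} + b * x_t,   y_t = c . h_t
    One sequential pass over the input -> linear in its length."""
    h = np.zeros_like(a)
    ys = []
    for x_t in x:
        h = a * h + b * x_t   # element-wise state update, O(d) per step
        ys.append(c @ h)      # scalar read-out per step
    return np.array(ys)

def attention_scores(q, k):
    """Self-attention score matrix: shape (L, L), so compute and
    memory grow quadratically with the sequence length L."""
    return q @ k.T / np.sqrt(q.shape[1])

rng = np.random.default_rng(0)
L, d = 8, 4
x = rng.normal(size=L)                       # scalar input sequence
y = ssm_scan(x, a=np.full(d, 0.9),
             b=rng.normal(size=d), c=rng.normal(size=d))
scores = attention_scores(rng.normal(size=(L, d)),
                          rng.normal(size=(L, d)))
print(y.shape, scores.shape)                 # (8,) (8, 8)
```

The point of the sketch: doubling L doubles the work of `ssm_scan` but quadruples the size of `scores`, which is why state-space models are attractive for long inputs.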

20h20 Networking with food and drinks provided by Oppenhoff
