Context: Foundation Models and the AI Act
Foundation models are AI models designed for generality of output and adaptable to a wide range of downstream tasks. An example is GPT-4, the model underpinning ChatGPT.
The EU institutions have now entered a critical phase in their negotiations over the treatment of such models, and a political consensus is anticipated by the end of the month. The open letter below is intended to help steer these discussions in a positive direction.
⎯⎯⎯
As European business leaders, startup founders, and investors, we wish to express our strong support for regulating foundation models1 under the EU AI Act.
We share the concerns expressed by tens of thousands of citizens, including hundreds of world-leading AI scientists,2 who argue that the development of such models may result in catastrophic harms, or even human extinction. Over the past six months, these concerns have been recognized by the European Commission President,3 the UN Secretary-General,4 the US President,5 and the UK Prime Minister.6
Regulatory efforts are sometimes held back by concerns about their effect on trade and industry. Since we represent one of the groups whose business interests depend most directly on foundation models, we hope our endorsement of regulating them carries weight.
We are far from the first business representatives to endorse regulating such models. Sixty European digital SMEs have already called for developers of foundation models to be regulated, arguing that the absence of such requirements would shift regulatory burdens from a few incumbents onto potentially hundreds of downstream companies.7,8
Our expertise is not in the nuances of regulatory design, so this letter is not a detailed recommendation. We merely request that there be requirements placed on foundation model developers that are proportionate to the high stakes at play. Such requirements should take into account that the EU AI Act rules won’t start applying until around two years from now, and must be capable of governing the increasingly powerful AI models that will be designed over many years to come.
Regulating only the companies that merely use foundation models could stifle innovation without reducing the most serious risks. The rules must target the right actor, and they cannot amount to administrative red tape and box-ticking exercises. The stakes are too high to let this become an AI version of the GDPR cookie notice.
The AI Act should make AI safer and more trustworthy for everyone by design, which necessitates putting effective rules on foundation models.
Europe is now in a position to guide global practices on a question of historical importance. We should embrace this opportunity and lead with foresight and responsibility.
1Also called “general purpose AI” (GPAI) models in some parts of the AI Act, despite no significant technical difference in reality.
2"Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war”.
3“Mitigating the risk of extinction from AI should be a global priority.”
4“Generative AI has enormous potential for good and evil at scale. Its creators themselves have warned that much bigger, potentially catastrophic and existential risks lie ahead. Without action to address these risks, we are derelict in our responsibilities to present and future generations.”
5“Emerging technologies such as Artificial Intelligence hold both enormous potential and enormous peril. [...] Together with leaders around the world, the United States is working to strengthen rules and policies so AI technologies are safe before they are released to the public, to make sure we govern this technology, not the other way around, having it govern us. I am committed to working through [the UN] and other international bodies and directly with leaders around the world including our competitors to ensure we harness the power of artificial intelligence for good, while protecting our citizens from its most profound risks. [...] It’s gonna take all of us to get this right.”
6“People will be concerned by the reports that AI poses an existential risk like pandemics or nuclear wars - I want them to be reassured that the government is looking very carefully at this.”
7“We therefore propose that providers of cutting-edge general purpose AI systems, such as large language models, carry a fair share of the burden, and ensure their systems are made to conform to European requirements as much as possible before being placed on the market. By doing so, smaller players will have reduced compliance costs, and therefore lower barriers to entry in the market, which will boost European innovation.”
8An earlier version of this statement used the words "arguing that the absence of such requirements would merely shift the regulatory burden from a few incumbents onto thousands of startups". The current text has been updated to more accurately represent the phrasing in the referenced European Digital SME Alliance article.
Demonstrate your support for this open letter by adding your own signature to the list.