
Artificial Intelligence: can it be controlled without preventing innovation?

Faculty and research

Published on February 02, 2024


On 4 December, while the draft “AI Act” was still being debated in Brussels, “AI Act Day” was setting the tone in Paris. In partnership with Datacraft, the Impact AI Think and Do Tank brought together a group of influential players in artificial intelligence (AI) around one guiding ambition: laying the foundations of an AI that is innovative and inspiring, yet also ethical, sustainable and responsible.

AI is developing in leaps and bounds. That’s why we love it. But it is also why it feels threatening, and why we want to regulate it. None of the many artificial intelligence players brought together on 4 December at the Bpifrance Hub by Datacraft and the Impact AI Think and Do Tank was in any doubt: responsible AI means controlled AI. The question is how. How can AI – particularly generative AI – be regulated without inhibiting innovation?

“There’s a time crunch”

That day, the subject was still the main topic for discussion between the European Parliament and the Member States of the European Union. Since then, they have produced an initial regulation, a basic “AI Act” that could enable companies to develop useful, virtuous, safe solutions. But AI is evolving faster than the law. While ChatGPT has revolutionised the AI landscape in less than a year, legislation, in the view of the most optimistic, will take at least two years to become effective. “There’s a time crunch”, remarks the préfet Renaud Vedel, coordinator of the national strategy for artificial intelligence.

How can we legislate for something that is constantly changing? How can a regulation introduced today apply to a technology whose state of development in two years’ time is an unknown quantity? “Twenty years ago, new versions of software came out every five years. With AI, it’s every six months,” said Guillaume Leboucher, a member of the Docaposte executive committee and creator of the “AI for Schools” foundation. “There will always be time lags – it’s inherent to our work; that’s innovation. Is it that serious?” Maybe it isn’t that serious, but it raises questions. To start with, the question of how the law is applied. This brings the conflict between civil law and common law to the fore again: if the law as written is lagging behind the actual situation, could case law make it possible to catch up? [...]

Read the full article here

Interested in Artificial Intelligence? Discover our programmes:
  • MSc Artificial Intelligence for Business Transformation
  • Mastère Spécialisé® Artificial Intelligence Project Manager
