Artificial Intelligence and Algorithms: Ethics and Fair Cooperation between AI and Human Intelligence

Published on October 10, 2023

With the recent buzz surrounding generative artificial intelligence (such as ChatGPT, Midjourney, etc.), several questions arise: are algorithms ethical, given how they are trained and reinforced, the data sets they use, their possible biases, and whether or not they are inclusive? It is also important to question the role of humans.

Does big data, characterised primarily by its volume, speed and variety, require the systematic use of AI, or is human processing sufficient and/or preferable?

A brief history of the development of AI

AI first appeared in the early 1950s with the first associated computer programming languages, such as LISP (designed for LISt Processing), and the famous Turing Test, which determines whether a machine can imitate a human conversation well enough to pass as a human. After some over-inflated expectations as to its possible uses, AI suffered waves of backlash and disappointment, the two so-called “AI winters”, and was then applied in fields that were narrower and more specialised than those initially envisaged or imagined.

More recently, neural networks made their appearance. Their capacity for self-learning and their ability to establish probabilistic correlations between data elements, combined with the computing power of machines, which according to Moore’s Law continues to double every 18 months, have significantly accelerated the rise of AI and its applications, with the result that the answers generated are both plausible and often astonishing. Large language models (LLMs), which make it possible to implement this type of AI, are models trained on large corpora of text.
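The statistical idea behind these models can be illustrated with a minimal sketch: a bigram model learns, from a training corpus, the probability of each word given the previous one, and uses those probabilities to score plausible continuations. This toy example (the corpus and function names are illustrative, not from the article) stands in for what LLMs do at vastly greater scale with neural networks.

```python
from collections import Counter, defaultdict

# Toy training corpus (illustrative only).
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count how often each word follows each other word.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def next_word_probs(word):
    """Estimate P(next word | word) from the corpus counts."""
    counts = follows[word]
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

# "the" is followed by "cat" twice, "mat" once, "fish" once,
# so the model judges "cat" the most plausible continuation.
print(next_word_probs("the"))
```

An actual LLM replaces these raw counts with a neural network trained on billions of words, but the output is the same in kind: a probability distribution over plausible next tokens.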

It is also legitimate to wonder whether we might experience a third AI winter in the medium term, just as the buzz and engagement generated by the metaverse have started to die down. However, where generative AI is concerned, the time horizon has accelerated. As an example, since January 2023, Sciences Po University in France has required its students to “explicitly mention” any passage written by ChatGPT; prohibition measures have also been taken in Italy.


This is an extract from an article on the SKEMA Publika website: read the full article here.
