Guidelines for the ethical use of AI in business
When a company or organisation designs or uses artificial intelligence (AI), it has a duty to ask how it can develop this technology responsibly, without giving rise to ethical problems.
It falls to both AI designers and the leaders who deploy this technology in their company's activities to establish conscientious practices based on a series of fundamental guidelines. So, what are those guidelines?
Ethical issues relating to the use of AI in business
Ethics is a set of well-reasoned moral principles whose goal is to define rules for life and action, provide recommendations and set limits, in order to orientate our existence and organise social life with the aim of preserving our societies.
As a result, encouraging ethical practice in the field of AI requires us to consider what has moral value, what gives meaning to our actions and our life as a community, what the desirable or fair outcomes are, and what defines us as moral beings.
In this respect, it seems to me that the ethical creation and use of AI in business should adhere to the following imperatives:
- Transparency
- Explainability
- Consideration of the different stakeholders
Ethics and the law: Two prescriptive spheres with separate goals
It is important to underscore the fact that ethics and the law do not have the same end goals. They come from two prescriptive worlds with separate mechanisms.
While there are a number of regulatory safeguards – such as the General Data Protection Regulation (GDPR), the AI Act, which aims to regulate the use of AI by establishing a European code of conduct, and various certifications – these rules of hard law are imposed on private actors (companies) by a public actor (the legislature), backed by judges who punish conduct that does not conform to the rule of law. Such measures are coercive, unlike ethical standards, which are merely incentivising.