The Library guides are not exhaustive. They contain selected resources from the online Eureka catalogue of the Council Libraries but also titles beyond our collections, including open-access publications.
The Library guides do not necessarily represent the positions, policies, or opinions of the Council of the European Union or the European Council.
Council of the EU
The Charter of Fundamental Rights in the context of artificial intelligence and digital change - Presidency conclusions
Publication Date: 21 October 2020
The Presidency of the Council issued presidency conclusions on the Charter of Fundamental Rights in the context of artificial intelligence and digital change. These conclusions are designed to anchor the EU's fundamental rights and values in the age of digitalisation, foster the EU's digital sovereignty and actively contribute to the global debate on the use of artificial intelligence with a view to shaping the international framework. The Presidency conclusions focus on a fundamental rights-based approach to artificial intelligence and provide guidance on dignity, freedoms, equality, solidarity, citizens' rights and justice.
Humans and societies in the age of artificial intelligence
Publication Date: 2021
Artificial Intelligence (AI) will radically change our lives and transform our societies. This shift, which has already started, will most probably be the deepest and the fastest humanity has ever experienced. While most of the ongoing discussions on AI limit themselves to the short and medium-term effects, this short and comprehensive report tries to go beyond the most immediate challenges and to explore also some of the longer-term impacts that AI may have on humans and societies. It summarizes the key issues in 10 takeaways and suggests a list of possible actions to be taken by policymakers.
Getting the future right: artificial intelligence and fundamental rights: report
Publication Date: 2020
Artificial intelligence (AI) already plays a role in deciding what unemployment benefits someone gets, where a burglary is likely to take place, whether someone is at risk of cancer, or who sees that catchy advertisement for low mortgage rates. Its use keeps growing, presenting seemingly endless possibilities. But we need to make sure to fully uphold fundamental rights standards when using AI. This report presents concrete examples of how companies and public administrations in the EU are using, or trying to use, AI. It focuses on four core areas - social benefits, predictive policing, health services and targeted advertising. The report discusses the potential implications for fundamental rights and analyses how such rights are taken into account when using or developing AI applications. In so doing, it aims to help ensure that the future EU regulatory framework for AI is firmly grounded in respect for human and fundamental rights.
The ethics of artificial intelligence: issues and initiatives
Publication Date: 2020
This study deals with the ethical implications and moral questions that arise from the development and implementation of artificial intelligence (AI) technologies. It also reviews the guidelines and frameworks which countries and regions around the world have created to address these. It presents a comparison between the current main frameworks and the main ethical issues, and highlights gaps around the mechanisms of fair benefit-sharing; assignment of responsibility; exploitation of workers; energy demands in the context of environmental and climate changes; and more complex and less certain implications of AI, such as those regarding human relationships.
The Assessment List for Trustworthy Artificial Intelligence (ALTAI) for self-assessment
Publication Date: 2020
In 2019 the High-Level Expert Group on Artificial Intelligence (AI HLEG), set up by the European Commission, published the Ethics Guidelines for Trustworthy Artificial Intelligence. The third chapter of those Guidelines contained an Assessment List to help assess whether an AI system that is being developed, deployed, procured or used adheres to the seven requirements of Trustworthy AI specified in the Guidelines: 1. Human Agency and Oversight; 2. Technical Robustness and Safety; 3. Privacy and Data Governance; 4. Transparency; 5. Diversity, Non-discrimination and Fairness; 6. Societal and Environmental Well-being; 7. Accountability.
Sectoral considerations on the policy and investment recommendations for trustworthy artificial intelligence
Publication Date: 2020
The European Commission has put at the core of its strategy for artificial intelligence (AI) the need for a human-centric approach, rooted in fundamental rights and European Union (EU) law. This approach was directly shaped by the work of the High-Level Expert Group on Artificial Intelligence (AI HLEG), encompassing both the Ethics Guidelines for Trustworthy AI and the Policy and Investment Recommendations for Trustworthy AI. The resulting focus is on Trustworthy AI, defined as AI that is legally compliant, ethically adherent, and socio-technically robust. The AI HLEG had already put forward this vision in its Ethics Guidelines on Trustworthy AI, published in April 2019, and confirmed it in its Policy and Investment Recommendations for Trustworthy AI, adopted in June 2019. Since then, the EU has further consolidated its position as an international leader in the definition of responsible uses of AI, in particular with the publication of the White Paper on Artificial Intelligence and the Communication on A European Strategy for Data in February 2020.
Ethics guidelines for trustworthy AI
Publication Date: 2019
The aim of the Guidelines is to promote Trustworthy AI. Trustworthy AI has three components, which should be met throughout the system's entire life cycle: (1) it should be lawful, complying with all applicable laws and regulations; (2) it should be ethical, ensuring adherence to ethical principles and values; and (3) it should be robust, both from a technical and a social perspective, since, even with good intentions, AI systems can cause unintentional harm. Each component in itself is necessary but not sufficient for the achievement of Trustworthy AI. Ideally, all three components work in harmony and overlap in their operation. If, in practice, tensions arise between these components, society should endeavour to align them.
Artificial Intelligence: potential benefits and ethical considerations
Publication Date: 2016
The ability of AI systems to transform vast amounts of complex, ambiguous information into insight has the potential to reveal long-held secrets and help solve some of the world's most enduring problems. However, as with all powerful technologies, great care must be taken in their development and deployment. To reap the societal benefits of AI systems, we will first need to trust them and make sure that they follow the same ethical principles, moral values, professional codes, and social norms that we humans would follow in the same scenario. Research and educational efforts, as well as carefully designed regulations, must be put in place to achieve this goal. International Business Machines Corporation (IBM) is actively engaged, both internally and with its collaborators and competitors, in global discussions about how to make AI ethical and as beneficial as possible for people and society.