Artificial Intelligence (AI) and Related EU-Policies Glossary
Artificial intelligence (AI) refers to the simulation of human intelligence processes by machines, particularly computer systems. It encompasses a variety of technologies that enable machines to perform tasks that typically require human cognition, such as learning, reasoning, problem-solving, understanding language, and decision-making. AI systems can be categorised into two types: narrow AI, which is designed for specific tasks like voice recognition or image analysis, and general AI, which aims to replicate human-like intelligence across a wide range of activities. Through techniques like machine learning, neural networks, and deep learning, AI continues to evolve, driving innovation in industries such as healthcare, finance, and transportation.
Generative AI: Refers to algorithms, such as those powering ChatGPT, that can be used to create new content, including audio, code, images, text, simulations, and videos. In social services, for example, generative AI can be used to brainstorm care and support plans or to organise users’ records.
European Union (EU) Coordinated Plan on Artificial Intelligence: A strategy created by the EU to make sure AI technology is developed and used in a way that benefits everyone in Europe. It coordinates and brings together the efforts of EU countries to encourage the growth of AI, while making sure it is safe, ethical, and respects people's rights. The plan focuses on three main goals: increasing investment in AI, creating rules and policies to ensure AI is used responsibly, and supporting research and development to make AI work better for society. Essentially, it aims to make sure AI helps improve life for people across Europe, while being fair and trustworthy.
European Commission (EC) Ethics Guidelines for Trustworthy AI: Prepared by the EC High-Level Expert Group on AI (HLEG), which is composed of 52 independent experts, this document offers recommendations and guidance on how to foster and secure the development of ethical AI systems in the EU. The core principle of the EU guidelines is that the EU must develop a 'human-centric' approach to trustworthy AI that is respectful of European values and principles.
EU Guidelines on Ethics in AI: Ethical guidelines for stakeholders when developing, deploying, implementing, or using AI systems, focusing on: 1. Human agency and oversight; 2. Technical robustness and safety; 3. Privacy and data governance; 4. Transparency; 5. Diversity, non-discrimination, and fairness; 6. Societal and environmental well-being; 7. Accountability.
The EU AI Act: The first-ever comprehensive legal framework on AI worldwide. The aim of the rules is to foster trustworthy AI in Europe. The AI Act sets out a clear set of risk-based rules for AI developers and deployers regarding specific uses of AI. The Act was designed to be broad and horizontal, with cross-cutting application, to avoid the need for multiple sectoral regulations for AI. Commendably, the AI Act explicitly mentions some specific risks that are heightened for people with disability, linked notably to biometric identification algorithms used in employment contexts (e.g., in recruitment, promotion, firing, task assignment, and monitoring).
World Health Organization (WHO) Guiding Principles for AI Design and Use: A set of six principles, published by the WHO, aimed at ensuring that AI works for the public interest in all countries, specifically with regard to health and healthcare: 1. protecting autonomy; 2. promoting human well-being, human safety, and the public interest; 3. ensuring transparency, explainability, and intelligibility; 4. fostering reproducibility and accountability; 5. ensuring inclusivity and equity; 6. promoting AI that is responsive and sustainable.
The Organisation for Economic Co-operation and Development (OECD) Report on Using AI to Support People with Disability in the Labour Market: Building on interviews with more than 70 stakeholders, this report explores the potential of AI to foster employment for people with disability, accounting for both the transformative possibilities of AI-powered solutions and the risks attached to the increased use of AI for people with disability. It also identifies obstacles hindering the use of AI and discusses what governments could do to avoid the risks and seize the opportunities of using AI to support people with disability in the labour market.
The above-mentioned policies and guidelines are referenced and explained in detail in the EPR Briefing on the Ethical Use of AI in Services for People with Disabilities. Following an in-depth analysis, they were used as a basis to formulate the EPR recommendations for policy-makers and service providers on how to ensure the ethical and inclusive provision of AI. The briefing is available here.