4.1. Introduction
Generative AI (GAI) raises a range of issues that need to be addressed not just by the entities that develop it but by society as a whole, users and non-users alike:
- Ethics and responsibility. Generative AI can create content that is misleading, offensive, or even illegal. This raises ethical questions about who is responsible for the content generated and how control algorithms can be implemented to prevent abuse.
- Biased training data. Generative AI models learn from data, and if the data used to train them are biased (culturally, by gender, etc.), the AI can learn these biases and replicate them in the content it generates.
- Controlling the information generated. It is difficult to ensure full control over the content generated by GAI. This may lead to situations where the generated content does not reflect the user’s intentions or may be misinterpreted.
- Disinformation and manipulation (deepfakes). The ability to generate convincing content can be used to create fake news or manipulate information.
- Copyright and intellectual property. Who owns the rights to AI-generated content? This is a complex legal issue that has not yet been clearly settled.
- Data privacy and security. Generative AI models store and process large amounts of data. It is important to ensure that these data are protected and not used improperly. How can the misuse of personal information be prevented?
- Generation of harmful content. AI can be used to generate content that is offensive, discriminatory, or harmful. How should this type of content be handled?
- Algorithm transparency. The methods used to train and operate the algorithms in generative AI can be opaque and difficult for people outside the field to understand.
- Equitable adoption and access. How do we ensure that the benefits of generative AI are distributed equitably and do not widen social and economic gaps?
- Regulations and standards. The laws and regulations governing the use of AI are still evolving and may be difficult to enforce in this innovative environment. In fact, in many cases they are just recommendations rather than regulations.
- Energy consumption and carbon footprint. Training and using large generative AI models can require immense amounts of energy. This is problematic if this energy comes from non-renewable sources that generate greenhouse gas emissions.
- Impact on the labour market. Could the adoption of generative AI technologies have implications for jobs in some sectors? Will new jobs be generated in the same proportion as the jobs that are destroyed?
- Authorship. How do we ensure people can distinguish between human-created and generative AI-generated content? This is essential for transparency and for people to trust the information they use.
- Inequality of access and concentration of power. Access to AI technology may not be equitable, and companies or organizations with more resources may have an unfair advantage when using these tools. This could lead to a greater concentration of power and increased social inequality.
Addressing these problems and challenges will require a combination of ethical conduct, regulatory oversight and ongoing technological development. It is important that communities around the world, including researchers, companies, policymakers and multidisciplinary teams from different branches of science (philosophers, physicists, mathematicians, computer scientists, linguists, sociologists, etc.) work together to find sustainable and responsible solutions.
The 2019 Beijing Consensus on Artificial Intelligence and Education contains guidelines on how to apply a humanistic approach to AI in education.
UNESCO has published a range of recommendations and guidelines on the ethical, safe and equitable use of generative AI in education and research.
UNESCO’s November 2021 Recommendation on the ethics of artificial intelligence provides a regulatory framework for addressing the controversies surrounding artificial intelligence, including in education and research.
In 2021 UNESCO also published AI and education: guidance for policy-makers, setting out specific recommendations for developing policies on the use of AI in education.
In 2023 UNESCO published Guidance for generative AI in education and research, a document of specific recommendations.
UNESCO’s review of current national AI strategies shows that countries are adopting different policy responses, ranging from banning GAI to assessing the need to adapt existing frameworks or urgently formulating new regulations.
The new European regulation on artificial intelligence, known as the EU Artificial Intelligence Act, is the world’s first regulation dedicated exclusively to AI.
European Parliament (2024). Regulation of the European Parliament and of the Council laying down harmonised rules on artificial intelligence (Artificial Intelligence Act) and amending certain Union legislative acts (2024/0138(COD)). https://www.europarl.europa.eu/doceo/document/TA-9-2024-0138_EN.pdf
This law seeks to balance the benefits of AI with potential risks to health, safety and fundamental rights. The key points of this law are as follows:
- Risk-based regulation. The law classifies AI systems into four risk categories: minimal, limited, high and unacceptable. (A toy illustration of this tiering is sketched after this list.)
- Minimal risk:
- Content recommendation apps: systems that recommend news articles, videos or music to users, like the algorithms used by platforms such as Spotify or YouTube.
- AI-powered computer games: games that use AI to improve the user experience without affecting fundamental rights or security.
- Limited risk:
- Conversational robots (chatbots): automated customer support services that are subject to specific transparency obligations to ensure users know they are interacting with a machine.
- Sentiment analysis: systems that analyse social media comments to determine the public’s attitudes towards products or services.
- High risk:
- Biometric surveillance systems: such as facial recognition used in the surveillance of public places, which could significantly impact privacy and civil rights.
- AI in human resources: resume filtering and candidate selection tools that can significantly influence job decisions.
- Predictive healthcare: algorithms that predict ailments based on personal health data, which require rigorous data protection measures and can directly affect the health and well-being of people.
- Unacceptable risk:
- Social scoring: systems that assign a score to individuals based on their personal behaviour or attributes, whether deployed by public or private entities.
- AI that exploits vulnerabilities: algorithms designed to identify and exploit an individual’s psychological weaknesses for marketing or other manipulative purposes.
- Real-time facial recognition by State law enforcement: in most cases, this use is prohibited due to the risks to privacy and individual freedoms, except in very limited circumstances, such as the prevention of serious offences.
- Specific prohibitions. Some AI practices, such as real-time remote biometric identification in public places, are entirely prohibited except in particular circumstances.
- Transparency and compliance. High-risk AI systems will need to be registered, and their providers must demonstrate compliance through conformity assessment processes, ensuring transparency, human oversight and cybersecurity.
- Impact on fundamental rights. Companies implementing high-risk AI systems are required to carry out fundamental rights impact assessments to ensure the protection of individual freedoms and avoid discrimination.
- Innovation and exceptions. The law provides a framework for AI experimentation within a regulated context and allows controlled testing under strict security conditions. In addition, the law does not apply to AI used exclusively for military purposes, nor to research activities prior to the marketing phase.
- International regulation and cooperation. The law seeks to be a global benchmark in AI regulation and to promote international standards in collaboration with other countries and international organizations.
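To make the tiered structure above concrete, here is a minimal, purely illustrative Python sketch that encodes the four risk categories and the example use cases as a simple lookup. The RiskTier enum, the RISK_TIERS mapping and the obligations() helper are hypothetical names invented for this sketch; the tier assignments simply restate the examples listed above and are in no way a legal classification tool.

```python
# Purely illustrative sketch of the EU AI Act's four risk tiers.
# The tier assignments below simply restate the examples in the text;
# this is NOT a legal classification tool.
from enum import Enum


class RiskTier(Enum):
    MINIMAL = "minimal"            # e.g. content recommendation, games
    LIMITED = "limited"            # e.g. chatbots, sentiment analysis
    HIGH = "high"                  # e.g. biometrics, HR screening
    UNACCEPTABLE = "unacceptable"  # e.g. social scoring (prohibited)


# Hypothetical mapping of example use cases to tiers (from the list above).
RISK_TIERS = {
    "content recommendation": RiskTier.MINIMAL,
    "ai-powered game": RiskTier.MINIMAL,
    "customer support chatbot": RiskTier.LIMITED,
    "sentiment analysis": RiskTier.LIMITED,
    "facial recognition surveillance": RiskTier.HIGH,
    "resume screening": RiskTier.HIGH,
    "predictive healthcare": RiskTier.HIGH,
    "social scoring": RiskTier.UNACCEPTABLE,
}

# One-line summary of the obligations attached to each tier.
OBLIGATIONS = {
    RiskTier.MINIMAL: "No specific obligations beyond existing law.",
    RiskTier.LIMITED: "Transparency obligations (disclose that AI is used).",
    RiskTier.HIGH: "Registration, conformity assessment, human oversight.",
    RiskTier.UNACCEPTABLE: "Prohibited (very narrow exceptions only).",
}


def obligations(use_case: str) -> str:
    """Look up the tier of a use case and summarize its obligations."""
    tier = RISK_TIERS.get(use_case.lower())
    if tier is None:
        return "Unlisted use case: a real assessment would be needed."
    return f"{tier.value}: {OBLIGATIONS[tier]}"


print(obligations("customer support chatbot"))
# limited: Transparency obligations (disclose that AI is used).
```

The only point of the sketch is to show that the Act’s obligations scale with the assigned tier: the same lookup yields no specific obligations for a minimal-risk system and an outright prohibition for an unacceptable-risk one.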
This European legal framework is an important step towards the safe and ethical adoption of artificial intelligence, as it ensures that technological development benefits society without compromising fundamental values or citizen safety.
The EU Artificial Intelligence Act will begin to apply by 2026. This two-year period before the law becomes enforceable was established for several practical and strategic reasons:
- Adaptation and compliance. Companies and other entities that develop or use AI technology need time to adapt to new regulations. This includes reviewing and possibly restructuring their systems and processes to ensure compliance with the law’s fundamental rights protection, transparency and security standards.
- Conformity assessment. High-risk AI system providers will have to submit their products to conformity assessment processes. These processes, which include testing, documentation and necessary adjustments, can be complex and time-consuming to complete properly.
- Training and resources. Regulatory and supervisory authorities, as well as companies, need time to train staff, develop protocols and establish the systems necessary for monitoring and enforcing the law.
- International coordination and standards. Because many AI systems operate across international borders, effective implementation requires a transition period for better coordination with other jurisdictions and international standards.
- Dialogue and feedback. This period also allows stakeholders, including enterprises, AI experts, civil rights groups, and the general public, to provide feedback on the law’s enforcement, which could lead to adjustments or refinements before full implementation.
This preparatory phase is crucial to ensure that all stakeholders are fully prepared and the law is effectively implemented, minimizing risks and maximizing the benefits of AI technologies within European society.
In short, the regulatory supervision of GAI in education and other areas requires a series of policy steps and measures based on a human-centred approach to ensure it is used ethically, safely and equitably.