EU and global partners sign the Council of Europe Framework Convention on AI and Human Rights

DCR (Data-Centric Regulation) has joined the European Union’s effort toward the mindful use of artificial intelligence (AI) through the newly signed Council of Europe Framework Convention on AI and Human Rights. This historic agreement, signed on September 5, 2024, marks the first legally binding international framework designed to ensure that AI systems are developed and deployed in ways that respect human rights, democracy, and the rule of law.

The Convention aligns with the principles outlined in the EU AI Act, reinforcing a human-centric approach to AI development and promoting transparency, risk management, and accountability. It includes innovative mechanisms such as regulatory sandboxes for safe AI experimentation and establishes strict oversight of high-risk AI systems to prevent harm to fundamental rights.

This agreement is seen as a significant step toward global AI governance, offering a common approach to managing AI systems while balancing innovation with the safeguards needed to protect societal values. Countries beyond Europe, including the United States, Canada, and Japan, also signed the Convention, indicating a broad international commitment to ethical AI development.

In this context, “DCR” is not a widely established acronym; it most likely refers to “Data-Centric Regulation” or a similar initiative aimed at governing how AI is developed and used, with a focus on transparency, accountability, and ethical standards.

Recently, the European Union joined a global effort for the mindful use of artificial intelligence (AI) by signing the Council of Europe Framework Convention on AI and Human Rights. This convention is the first legally binding international agreement aimed at ensuring that AI technologies are designed and used in ways that respect human rights, democracy, and the rule of law. It aligns with the EU’s AI Act, which sets out rules for deploying AI systems that prioritize safety and fairness while supporting innovation.

The convention addresses key areas such as transparency in AI-generated content, rigorous risk management for high-risk AI systems, and mechanisms for responsible AI experimentation. It involves a broad coalition of nations, including the United States, Canada, Japan, and several European countries, reflecting a collective commitment to ethical AI development (European External Action Service).
