The European Union General Data Protection Regulation (GDPR) requires organisations to be able to explain decisions made by their algorithms. The European Commission published its Ethics Guidelines for Trustworthy AI in April 2019, announcing that it would soon propose legislation for a coordinated European approach to AI. In July 2019, the Commission published a factsheet on Artificial Intelligence for Europe, which underlines the importance of AI in boosting the EU's competitiveness, ensuring trust based on European values, and improving people's lives. The factsheet describes the EU's role in AI, sets out the financial investments the Commission plans to make, and gives examples of AI projects conducted by the Commission. Financiers that rely on algorithms and big data therefore have a particular interest in how the Commission plans to keep AI-driven services ethical.
At the international level, 42 countries (the OECD's 36 member countries, along with Argentina, Brazil, Colombia, Costa Rica, Peru and Romania) formally adopted the first set of intergovernmental policy guidelines on AI, the OECD Recommendation on AI, in May 2019. The Recommendation aims to foster innovation and trust in AI by promoting the responsible stewardship of trustworthy AI while ensuring respect for human rights and democratic values. Subsequently, in June 2019, the G20 Digital Economy Ministers outlined their commitment to a human-centred approach to AI, publishing a set of G20 AI Principles drawn from the OECD Recommendation on AI.
Other notable government initiatives include setting up AI ethics councils or task forces and collaborating with other national governments, corporations, and other organisations. Though most of these efforts are still at an early stage and do not impose binding requirements on companies (with the GDPR a prominent exception), they signal a growing sense of urgency about the ethical issues raised by AI.