Ethical AI Frameworks & Toolkits

Ethical AI Advisory provides the following services:
- Education on Ethical AI Principles and Frameworks: Leadership team awareness training and education on the principles and frameworks involved in Ethical AI that meet government or international guidelines
- Design and development of Ethical AI Frameworks: Advice and workshops provided in order for leadership teams to design and develop Ethical AI Frameworks specific to their organisation, industry sector, existing regulations, client groups and objectives
- Design and development of Ethical AI Toolkits: Advice and workshops to design and develop a range of tools to implement the Ethical AI principles contained in the framework. This includes practical tools to design, deploy, measure and assess the effectiveness of the operationalisation of Ethical AI.
Governments and interest groups across the world have been collaborating on and developing frameworks for Ethical AI that are then deployed in each country; however, there is not yet an international standard. Human Rights Commissions have also been very active in developing a human rights lens for the development and deployment of AI.
In Australia, on November 7, 2019, the Minister for Industry, Science and Technology, Karen Andrews, released the official AI Ethics Framework for Australia.
This is not legislation or regulation, but rather a guideline for organisations to use when designing, developing, integrating or otherwise using Artificial Intelligence. Participation in and use of the framework is voluntary.
In March 2020, Standards Australia released its priorities for AI standards development.
The 8 key principles of the AI Ethics Guidelines are:
1. Human, social and environmental wellbeing:
Throughout their lifecycle, AI systems should benefit individuals, society and the environment.
2. Human-centred values:
Throughout their lifecycle, AI systems should respect human rights, diversity, and the autonomy of individuals.
3. Fairness:
Throughout their lifecycle, AI systems should be inclusive and accessible, and should not involve or result in unfair discrimination against individuals, communities or groups.
4. Privacy protection and security:
Throughout their lifecycle, AI systems should respect and uphold privacy rights and data protection, and ensure the security of data.
5. Reliability and safety:
Throughout their lifecycle, AI systems should reliably operate in accordance with their intended purpose.
6. Transparency and explainability:
There should be transparency and responsible disclosure to ensure people know when they are being significantly impacted by an AI system, and can find out when an AI system is engaging with them.
7. Contestability:
When an AI system significantly impacts a person, community, group or environment, there should be a timely process to allow people to challenge the use or output of the AI system.
8. Accountability:
Those responsible for the different phases of the AI system lifecycle should be identifiable and accountable for the outcomes of the AI systems, and human oversight of AI systems should be enabled.
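One way to begin operationalising the eight principles above is to encode them as a simple self-assessment checklist that flags weak areas for review. The sketch below is illustrative only; the 0-5 scoring scale, the threshold, and the function names are assumptions, not part of the official framework.

```python
# Illustrative sketch: the eight AI Ethics Principles as a
# self-assessment checklist. Scale, threshold and names are assumed.

PRINCIPLES = [
    "Human, social and environmental wellbeing",
    "Human-centred values",
    "Fairness",
    "Privacy protection and security",
    "Reliability and safety",
    "Transparency and explainability",
    "Contestability",
    "Accountability",
]

def assess(scores):
    """Return principles scored below a minimum threshold (0-5 scale)."""
    return [p for p in PRINCIPLES if scores.get(p, 0) < 3]

# Example: flag principles needing attention for a hypothetical system
review = {p: 4 for p in PRINCIPLES}
review["Contestability"] = 1        # e.g. no appeal mechanism yet
print(assess(review))               # -> ['Contestability']
```

A real assessment would attach evidence and owners to each principle, but even this minimal form makes gaps visible early in the lifecycle.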
The Australian Human Rights Commission has been working diligently on establishing guidelines and raising questions related to the impact that Artificial Intelligence may have on the rights of individuals.
Recommendations raised by the Commission include that technology adhering to human rights guidelines should:
- Protect individual dignity and promote the flourishing of communities
- Comply with Human Rights Laws, domestic and international
- Protect and promote Human Rights
- Be fair, inclusive and reduce harm
- Instil public and individual trust
- Promote equality and not be discriminatory
- Be accountable, lawful, transparent and explainable
- Have human oversight and intervention
- Have strategies, standards and human impact assessments
We strongly recommend that organisations consider both ethics and human rights when designing, developing, integrating or otherwise using Artificial Intelligence.
Useful resources from other countries and organisations that have developed similar Ethical AI frameworks include:
- United Nations Guiding Principles on Business and Human Rights
- IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems
- European Commission (EC) Ethics Guidelines for Trustworthy Artificial Intelligence
- UK – Data Ethics Framework and Code of conduct for data-driven health and care technology
- Canada – Algorithmic Impact Assessment
- Singapore’s Proposed Model AI Governance Framework
Ethical AI Toolkits
Toolkits to enable organisations to best design and deploy an Ethical AI strategy may include the following:
- Risk Assessments – human, organisational, societal, governmental, environmental
- Monitoring tools
- Easy appeal mechanisms
- Collaboration tools – intra and inter-organisational
- Public consultation tools
- Impact Assessments
- Internal/External Reviews
- Privacy Impact assessments
- Evaluation metrics
- Best practice guidelines
- Data requirements
- Removing data bias
- Legislation and regulation checklists
- Safe use guidelines
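As an illustration of the "evaluation metrics" and "removing data bias" items above, one widely used fairness measure is the demographic parity difference: the gap in favourable-outcome rates between groups. A minimal sketch follows; the group labels and decision data are made-up illustrative values, not real results.

```python
# Illustrative sketch of one evaluation-metrics tool: demographic
# parity difference, a common measure of outcome bias across groups.

def demographic_parity_difference(outcomes, groups):
    """Max gap in positive-outcome rates across groups.

    outcomes: iterable of 0/1 decisions (1 = favourable outcome)
    groups:   iterable of group labels, aligned with outcomes
    """
    positives = {}
    totals = {}
    for y, g in zip(outcomes, groups):
        positives[g] = positives.get(g, 0) + y
        totals[g] = totals.get(g, 0) + 1
    rates = {g: positives[g] / totals[g] for g in totals}
    return max(rates.values()) - min(rates.values())

# Example: loan approvals (1 = approved) for two illustrative groups.
# Group A is approved at 3/4 = 0.75, group B at 1/4 = 0.25.
decisions = [1, 1, 0, 1, 0, 0, 1, 0]
members   = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(demographic_parity_difference(decisions, members))  # -> 0.5
```

A value near 0 suggests similar treatment across groups; a large gap is a signal to investigate the data and model, not proof of unlawful discrimination on its own.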
Ethical AI Advisory works with organisations to develop or deploy these Toolkits so that they may effectively deliver against an Ethical AI Framework.
We'd love to hear from you.