We believe in ethical AI governance that is industry-led and regulated only where AI applications put human well-being at risk.
Our Position
We believe governance of AI is an essential step in ensuring this emerging technology is used responsibly.
We believe in four core principles that should guide AI development and use:
Privacy & Security – AI should be secure and respect personal privacy.
Reliability & Safety – AI should operate in a reliable and safe way.
Transparency – AI systems should be documented and understandable to those who use and are affected by them.
Accountability – The people who design and deploy AI systems should be held accountable for how those systems operate.
We believe in an open, lightly regulated environment in which innovation can flourish, and in which self-governance and standardization ensure the principles above are put into practice. However, we also see a role for legislative regulation where AI systems put human well-being at risk.