As AI applications become more powerful and easier to deploy, a number of standards and frameworks for governing AI are emerging, including specific legislation.
The EU AI Act, while still being finalised, is poised to deliver a number of strict requirements for AI-enabled products sold within the EU.
Depending on their risk level, applications must adhere to standards of:
High-risk applications will require assessment and certification with respect to these requirements and must be registered.
A range of international standards is also under development to govern specific aspects of AI technology, including:
Within the Australian context, a number of state and federal frameworks for governing AI are also under development. The New South Wales Government has published a framework for conducting an AI Assurance Assessment, which evaluates:
As government and enterprise-scale organisations move towards broad adoption of AI-enabled solutions, both international standards and the relevant national legislative frameworks are expected to be leveraged to ensure that AI products and their applications conform to emerging societal expectations of responsible AI. Broad legal mechanisms such as product liability, alongside AI-specific mechanisms, will likely be used to enforce these expectations.
All organisations deploying AI solutions will soon need practical mechanisms for governing the AI solutions in use and for demonstrating their compliance with relevant standards and legislation.
KJR will apply its VDML methodology to guide your development of robust and reliable Machine Learning models that deliver these AI solutions.