As artificial intelligence (AI) systems begin to control safety-critical infrastructure across a growing number of industries, ensuring the safe use of AI has become a top priority.
Quality assurance and risk management company DNV GL has launched a position paper to provide guidance on the responsible use of AI. The paper asserts that data-driven models alone may not be sufficient to ensure safety, and it calls for a combination of data-driven and causal models to mitigate risk.
Entitled AI + Safety, the position paper details the advance of AI and how such autonomous, self-learning systems are becoming increasingly responsible for making safety-critical decisions.