Managing the Risks of AI
- Sophie Bell
- Mar 24
- 4 min read

Artificial intelligence (AI) is changing the world, but there are serious risks associated with this powerful technology. A lack of understanding of how to manage these risks can make businesses reluctant to incorporate AI into their operations. These hesitant businesses are missing out on the potentially enormous opportunities afforded by the new technology.
With careful thought and planning, the risks related to AI can be incorporated into existing risk management controls, giving businesses the opportunity to use the technology safely.
Why controlling AI matters
Imagine AI as a super employee. This member of staff is extremely fast and knowledgeable, producing detailed reports in seconds. They don’t need sleep and are happy to work 24 hours a day, seven days a week. They also come at a reasonable price for their potential level of output.
However, this employee sometimes makes things up. Their responses can be opaque, as they struggle to explain how they arrived at their solutions. Their audit trail is questionable and hard to find. They are careless with the data shared with them, potentially leaving it scattered all over the internet. Plus, their output can be very biased.
Would you employ that candidate? Well – you might, but you’d want some controls around their output and functioning. AI-related risks are substantial and complex, so the controls around them are crucial to the technology’s success. Strong AI controls and risk management systems enable businesses to use AI while staying safe.
Identifying risks and implementing controls
If a firm’s existing risk management framework is strong and thorough, it should be able to incorporate any new technology or system, including AI. In theory, managing the risks associated with AI should not require a whole new risk framework.
A business’s priority should be to identify any AI-related risks. You can’t manage what you don’t understand. Some of the risks are typical of any new product or service, but there are additional risks specific to AI, including bias, discrimination, data privacy, breach of copyright/IP, lack of accountability, inaccurate outputs, unethical use and loss of control. For example, AI tools can exhibit bias because of the historical data used to train them in the first place, or because the underlying algorithms reflect choices made by development teams that may lack diversity. There are also risks to adopting AI incorrectly, and risks to not adopting it at all.
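To make this concrete, here is a minimal sketch of an AI risk register in Python. The risk names come from the list above; the fields and ratings are illustrative assumptions, not a standard:

```python
from dataclasses import dataclass

@dataclass
class AIRisk:
    """One entry in a simple AI risk register (illustrative fields only)."""
    name: str          # e.g. "bias", "data privacy"
    description: str   # what could go wrong, in plain language
    likelihood: str    # assumed scale: "low" / "medium" / "high"
    impact: str        # assumed scale: "low" / "medium" / "high"

# A starting register drawn from the AI-specific risks named above.
risk_register = [
    AIRisk("bias", "Training data or design choices skew outputs", "high", "high"),
    AIRisk("data privacy", "Sensitive data leaks via prompts or outputs", "medium", "high"),
    AIRisk("inaccurate outputs", "The tool confidently makes things up", "high", "medium"),
]

# You can't manage what you don't understand: name it, rate it, review it.
for risk in risk_register:
    print(f"{risk.name}: {risk.likelihood} likelihood, {risk.impact} impact")
```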
Once you’ve identified all the AI risks, you can start implementing the relevant controls. AI controls share the same preventative, detective and corrective characteristics as other controls, but there are nuances specific to AI. For example, AI controls should place a greater focus on prevention and very early detection, because AI risk can move through a business very quickly. Picking a risk up at inception is crucial to stopping its rapid spread.
Furthermore, as AI is automated, there should be a greater proportion of automated controls in place. AI operates considerably faster than humans, so human surveillance can’t keep up with AI risks as they develop. Controls need to be automated instead, using AI to control AI, but always with human oversight too.
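As a purely illustrative sketch of that layered approach, an automated control might wrap every call to an AI tool. The check functions, blocked terms and red-flag phrases below are invented for the example, not a real library’s API:

```python
# "AI controlling AI", with a human making the final call.
BLOCKED_TERMS = {"customer_ssn", "account_password"}          # assumed sensitive markers
SUSPICIOUS_PHRASES = ["guaranteed returns", "100% accurate"]  # assumed red flags

def preventative_control(prompt: str) -> bool:
    """Stop a risky request before it ever reaches the AI tool (prevention)."""
    return not any(term in prompt.lower() for term in BLOCKED_TERMS)

def detective_control(response: str) -> bool:
    """Scan the output the moment it is generated (very early detection)."""
    return not any(phrase in response.lower() for phrase in SUSPICIOUS_PHRASES)

def run_with_controls(prompt: str, model_call) -> str:
    """Run an AI call with automated controls before and after it."""
    if not preventative_control(prompt):
        return "BLOCKED: prompt failed preventative control"
    response = model_call(prompt)
    if not detective_control(response):
        # Detection is automated, but a human reviews before release.
        return "HELD FOR HUMAN REVIEW: " + response
    return response
```

The specific checks don’t matter; the point is that prevention happens before the call and detection immediately after, at machine speed, with a human still making the final decision.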
As these automated controls are typically built into code, they are not always easy to identify, which makes them hard to measure and monitor. Ask: are these automated controls known and identified? And do they have an owner? A control which is not owned is not a control.
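One way to make that ownership visible is a simple control inventory that flags anything unowned. The control names and owners below are hypothetical:

```python
# A minimal control inventory (all entries invented for illustration).
controls = {
    "prompt-pii-filter": {"type": "preventative", "owner": "Data Officer"},
    "output-bias-scan":  {"type": "detective",    "owner": "CTO"},
    "model-rollback":    {"type": "corrective",   "owner": None},  # unowned!
}

unowned = [name for name, control in controls.items() if control["owner"] is None]
if unowned:
    # A control which is not owned is not a control.
    print("Controls needing an owner:", ", ".join(unowned))
```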
Design effectiveness and testing are the next considerations. The core question of design effectiveness is whether the control, as designed, can actually do what it’s supposed to do. This should be considered while the control is being developed, as part of design and implementation, and then measured at the user-testing stage. After that, any update should follow change management guidelines.
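Design effectiveness can then be checked like any other piece of software: with tests that prove the control does what it’s supposed to do. A minimal sketch, reusing the hypothetical preventative_control from the earlier example and runnable with pytest:

```python
def test_preventative_control_blocks_sensitive_prompts():
    # The control must stop prompts containing blocked terms.
    assert not preventative_control("please summarise the customer_ssn file")

def test_preventative_control_allows_clean_prompts():
    # ...and must not block ordinary business requests.
    assert preventative_control("summarise last quarter's sales report")
```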
AI ownership
A business needs a dedicated member of staff to define its AI strategy, and that person owns the risks too. The lead might be the CTO, the CIO, the Chief Product Officer or the Data Officer, depending on the nature of the business. That said, cross-collaboration is key: everyone is a risk manager. It’s up to the lead to make sure all staff understand the risks of AI.
Controls should then be reviewed and challenged regularly, with audits and detection testing to assess whether the AI controls are meeting their objectives. This structured control management system needs to be ongoing, not just once a quarter or once a month. Businesses need real-time risk management, with a focus on prevention and early detection.
Risks and opportunities
AI is here to stay, and businesses that fail to embrace it are taking a major strategic risk. But AI without robust risk and controls management also brings risk.
Firms need to put in place an internal control system so they can take advantage of the opportunities offered by AI, without getting themselves into trouble. Risk and control ownership are crucial to the technology’s success. Controls are actually enablers.
