With AI adoption in business on the rise, it is vital to consider its true impact and any risk factors to ensure AI is working for you. Organisations often find themselves shying away from AI adoption amid concerns over what it entails. On the other hand, some organisations fear being left behind and rush to adopt AI without considering appropriate use cases or the potential downsides of a poorly considered implementation.

Organisations therefore face a dual risk: adopting AI too quickly, or being too slow to take up appropriate uses of it. The best way to address both scenarios is with proper AI governance.

In February and March this year, BJSS held AI governance roundtable events in Leeds and London. Over 20 senior leaders attended the events hosted by Laura Musgrave, Lead Researcher in Responsible AI, SPARCK, and Simon de Timary, Head of Data & AI, BJSS.   

The groups explored key topics, including challenges related to intellectual property (IP), establishing effective governance rules, navigating reputational risks and addressing legislative challenges. One of the key recurring themes was AI adoption and the considerations that come with it. In this blog we explore AI adoption, examples of what happens when appropriate use cases are not identified, and what can happen when companies do not embrace AI.

Understanding Urgency

There is a balancing act between responding to the pressure to implement AI quickly and avoiding inappropriate use that is not aligned with a clear strategy or lacks proper protocols.

According to 2023 data from Statista, 91% of businesses foresee impact from the surge of AI and generative AI, ranging from moderate to transformational. This further emphasises the need for your organisation to craft a robust AI strategy.

[Chart: Predicted impacts on professional services in the next five years worldwide, 2023]

Source: Thomson Reuters (2023). Predicted impacts on professional business services in the next five years worldwide in 2023. Statista. Accessed: 8 May 2024. https://www.statista.com/statistics/1447527/predicted-impacts-on-professional-services/

To make informed decisions about AI and how it could benefit an organisation, it is key to initially assess the business needs and goals. Identifying potential use cases for AI and understanding the expected return on investment (ROI) is a good starting point when developing an AI strategy. 
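
To make this concrete, a simple back-of-the-envelope ROI calculation for a candidate use case might look like the sketch below. The figures, and the chatbot scenario itself, are purely illustrative assumptions rather than benchmarks or BJSS guidance.

```python
# Illustrative only: a back-of-the-envelope ROI estimate for a candidate AI use case.
# All figures are hypothetical placeholders, not benchmarks or BJSS guidance.

def simple_roi(annual_benefit: float, annual_running_cost: float,
               build_cost: float, years: int) -> float:
    """ROI over the period: (total benefit - total cost) / total cost."""
    total_benefit = annual_benefit * years
    total_cost = build_cost + annual_running_cost * years
    return (total_benefit - total_cost) / total_cost

# Hypothetical example: a chatbot expected to save £120k a year,
# costing £150k to build and £40k a year to run, assessed over three years.
roi = simple_roi(annual_benefit=120_000, annual_running_cost=40_000,
                 build_cost=150_000, years=3)
print(f"Estimated three-year ROI: {roi:.0%}")  # roughly 33%
```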

When AI Goes Wrong 

As AI continues to advance, instances where AI systems have veered off course or exhibited unintended behaviour have become increasingly prominent.  

DPD disabled its AI chatbot after it seemingly malfunctioned, causing customer service issues. The bot appeared to respond inappropriately to queries, even swearing in responses to customers. Meanwhile, Air Canada's chatbot gave a traveller misinformation about the airline's bereavement fare policy, and Air Canada was held responsible for compensating the customer who acted on the information provided by the chatbot.

These examples are reminders of the importance of responsible creation, deployment and management of AI systems. From biased decision-making to unexpected consequences, these cases provide valuable insights into the complexities and potential risks associated with AI adoption. 

Considering Appropriate AI Use Cases  

Finding the right use cases for AI implementation, aligned with business objectives, is crucial for successful AI use. The correct use case can help optimise resource allocation, mitigate risks and build confidence and support within the organisation.

Some retail companies have implemented AI-driven chatbots without considering appropriate use cases or customer needs. This has led to frustrating customer experiences and a disconnect between the technology's capabilities and customer expectations.  

In healthcare, there have been instances where AI algorithms were deployed without thorough consideration of patient privacy and ethical implications. This inevitably raised concerns about data security and patient consent.  

In each of these cases, organisations may have rushed to adopt AI technology without carefully evaluating its suitability for their particular context, resulting in suboptimal outcomes for service users and staff.

Some leading organisations are approaching AI with extreme caution. Apple, alongside major leaders in the financial industry, has banned the use of generative AI tools by employees. However, measures like this run the risk of shadow IT, where employees use AI tools without the oversight or protection of a company-wide IT policy governing their use.

One example of an appropriate AI use case is the work that BJSS carried out for NICE (the National Institute for Health and Care Excellence) on the development of a regulatory framework for AI applications in healthcare. The resulting service, an accessible online platform developed by BJSS for NICE and other agencies, simplified the regulatory landscape for digital health innovators by consolidating and clarifying regulations and offering support, information and advice. It helped developers navigate complex regulatory requirements, accelerating the adoption of innovative health technologies in the NHS. Here, AI adoption is being supported in a way that helps improve patient outcomes, enhance diagnostic accuracy and accelerate medical research, one example of AI's potential to revolutionise healthcare.

Embracing AI Responsibly 

AI awareness and training are vital to empower individuals and organisations to understand AI's potential, mitigate risks and leverage opportunities effectively. Without such knowledge, there's a risk of misinformed decision-making and missed chances for innovation and growth. 

Acknowledging the challenge in interpreting AI-generated information is crucial. Users should not believe everything AI says. Understanding its limitations and potential biases helps prevent misinterpretation, ensuring informed decision-making and fostering a healthy scepticism toward AI-generated outputs. 

One topic that arose in both roundtable discussions was the importance of embracing AI responsibly. When AI governance is carried out correctly, it becomes a key driver for enabling innovation. AI can be powerful and extremely effective when used appropriately.

For example, Project SEEKER was developed by BJSS in collaboration with Heathrow Airport, Microsoft, UK Border Force and Smiths Detection. Here, AI is used to automatically detect illegal wildlife items in luggage and cargo at borders. Using AI algorithms trained on CT scans, the solution alerts enforcement agencies when contraband is detected, aiding in the fight against illegal wildlife trafficking with over 70% accuracy. 
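
The detail of the SEEKER pipeline is beyond the scope of this blog, but conceptually a detect-and-alert loop of this kind can be sketched as follows. The labels, threshold and function names here are illustrative assumptions, not the real system.

```python
# Conceptual sketch of a detect-and-alert loop like the one described above.
# The labels, threshold and classifier are illustrative assumptions, not Project SEEKER itself.

from dataclasses import dataclass
from typing import Callable, List

@dataclass
class ScanResult:
    item_id: str
    label: str         # e.g. "ivory", "pangolin_scale" or "clear"
    confidence: float  # model confidence between 0 and 1

def screen_scans(scans: List[bytes],
                 classify: Callable[[bytes], ScanResult],
                 alert_threshold: float = 0.7) -> List[ScanResult]:
    """Run a trained classifier over CT scan images and flag suspected contraband."""
    flagged = []
    for scan in scans:
        result = classify(scan)
        if result.label != "clear" and result.confidence >= alert_threshold:
            flagged.append(result)  # hand off to enforcement officers for inspection
    return flagged
```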

In another AI success story, BJSS collaborated with Care Fertility to enhance embryo selection during IVF. By analysing time-lapse images, AI algorithms identify key developmental stages, automating embryo annotation with accuracy comparable to manual methods. This innovation drastically reduces embryologists' workload, enhances reproducibility, and ultimately improves patient treatment outcomes, marking a significant advancement in IVF technology. 

A Checklist For Effective AI Adoption 

AI awareness 

It is crucial that you assess and understand not just the opportunities that AI can bring to your organisation, but the potential pitfalls, too. What is often overlooked is how ready your organisation is to adopt AI. BJSS offers an AI readiness assessment that evaluates an organisation’s readiness and effectiveness regarding the use of AI and provides tailored recommendations to fully leverage AI as a value driver. 

AI strategy 

As mentioned throughout this blog, choosing appropriate use cases for AI in your organisation is key. Your AI strategy should act as a handbook that all stakeholders can access, giving them a full understanding of each step of the strategy and how AI is set to become part of the organisation.

AI governance 

AI governance is crucial for organisations to assess and manage risks and ensure overall confidence in AI use. It involves establishing policies, procedures and controls to mitigate ethical, legal and operational risks associated with AI deployment. By implementing robust governance frameworks, organisations can promote transparency, accountability, and responsible AI practices. 
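
As a loose illustration of how such policies and controls can be made operational, some teams encode a pre-deployment checklist that every model release must pass. The checks below are hypothetical examples rather than a prescribed framework.

```python
# Illustrative sketch: a pre-deployment governance gate expressed as code.
# The specific checks are hypothetical examples, not a prescribed framework.

from dataclasses import dataclass
from typing import List

@dataclass
class ModelRelease:
    name: str
    has_accountable_owner: bool
    bias_audit_passed: bool
    data_protection_review_passed: bool
    human_oversight_defined: bool

def governance_gate(release: ModelRelease) -> List[str]:
    """Return unresolved governance issues; an empty list means the release may proceed."""
    issues = []
    if not release.has_accountable_owner:
        issues.append("No accountable owner recorded")
    if not release.bias_audit_passed:
        issues.append("Bias and fairness audit not passed")
    if not release.data_protection_review_passed:
        issues.append("Data protection review outstanding")
    if not release.human_oversight_defined:
        issues.append("Human oversight process not defined")
    return issues
```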

The BJSS eBook, Responsible AI: A comprehensive guide to governance, provides a thorough overview of AI governance, policies, frameworks and practices that should guide AI development at ideation and beyond.  

Striking The Right Balance Between Urgency And Caution 

Rushing into AI adoption without proper governance can lead to suboptimal outcomes, while hesitancy can result in missed opportunities. Awareness, training, planning and governance are essential for responsible AI use. By finding this equilibrium, organisations can harness AI's potential while mitigating risks, fostering innovation and maximising benefits for all stakeholders.