Getting Started with AI: 8 Strategies for Responsible Use of AI

Artificial intelligence (AI) and its powerful counterpart, generative AI (gen AI), are revolutionizing the way enterprises operate. The value of AI is undeniable: from streamlining operations and improving productivity to creating groundbreaking products, these technologies offer a competitive edge. However, for enterprise leaders, navigating the path to responsible AI adoption requires a clear understanding of potential risks. Here are eight strategies to make your AI journey a success:

1. Anchor your organization with the right guiding principles for AI

An AI approach should be grounded in principles that reflect your specific business context, values, industry, risk appetite, expertise, and environmental realities, including existing tools, processes, and regulatory environment. Every organization will approach AI differently based on the nature of its business and how data-intensive it is. Some principles that we see our clients navigating include:

  • Build vs. buy vs. adapt: Assess whether to construct AI capabilities in-house, purchase off-the-shelf solutions, or adapt existing tools to meet specific needs.
  • Partner vs. own: Decide whether to collaborate with external partners or consultants to build and deploy solutions or develop proprietary AI solutions and support deployment internally.
  • Best-of-breed vs. ‘one killer platform’: Choose between integrating specialized AI tools from various vendors or opting for a comprehensive, all-in-one platform (or somewhere in between).
  • Trailblaze vs. cautiously adopt: Determine whether to lead AI innovation by driving applications or products in novel areas to differentiate your service or competitive positioning, or to proceed cautiously and in phases: observing industry trends and best practices, waiting for the core tools you already use to integrate AI, and focusing on the most tangible applications of AI first.

Each of these principles depends on the following considerations:

  • Alignment with cloud principles: Your AI strategies must be in harmony with existing or planned cloud principles, including scalability, interconnectivity, security, risk, and cost reduction. Cloud is a critical enabler for more cost-effective access to the heavy computing needed for advanced AI techniques.
  • Risk and security appetite: Evaluate your tolerance for risk and establish robust security measures to safeguard data and AI systems across data pipelines, model approaches, and data processing.
  • Change story and messaging: Craft a narrative around AI adoption that communicates the transformative potential and addresses stakeholder or employee concerns effectively.

2. Select the right use cases for implementing AI solutions

Many organizations are drawn to using AI for its promises of increasing operational efficiency and providing a competitive edge. However, AI isn’t a fix-all solution. Picking the right job for AI is crucial. Not every problem needs an AI solution; an AI solution may be a sledgehammer-sized tool for a nail-sized problem. AI works well for tasks such as spotting patterns, finding knowledge, creating content, or making predictions, but it’s not great for everything (yet).

In determining suitable AI use cases, organizations should prioritize tasks where AI can add tangible, measurable value. Some common and proven use cases include areas such as enhancing customer experience via conversational AI-powered chatbots, fraud detection in insurance and finance, personalized recommendations in e-commerce, supporting end-user productivity, or optimizing supply chain logistics. Tasks that require human judgment, creativity, or ethical considerations may not be suitable for AI. If organizations don’t choose the right tasks for AI or use it without consideration, it can waste time, effort, and money.

Organizations must select use cases deliberately and apply AI only where it can genuinely help. By focusing on tasks where AI enhances human capabilities or streamlines processes, they can make the most of its benefits and stay ahead of their peers. This approach ensures that AI initiatives align with business objectives, fostering innovation, driving growth, and ultimately achieving sustainable competitive advantage.

3. Establish clear ownership and accountability for AI capabilities

As AI becomes more integrated into our lives, questions regarding accountability arise. For example, who is responsible if AI generates a misinformed or biased email response? Clear ownership and accountability across all stages are paramount to ensure responsible development and use of AI.

The responsibility landscape is multifaceted. AI users require proper training to understand the capabilities and limitations of these tools. They must be able to leverage AI responsibly and intervene when necessary. Managers play a critical role in ensuring their teams are equipped and using AI in accordance with company policies. They must take ownership by establishing clear guidelines for AI use, including risk management protocols and robust cybersecurity measures for AI-related incidents. Incident response plans should be in place to address potential issues effectively.

Furthermore, organizations developing internal AI models should ensure that AI and machine learning engineers and developers train AI systems responsibly. This training includes ensuring unbiased outcomes and building in robust safety considerations for future use. Organizations using third-party providers should hold those vendors accountable for delivering reliable, secure, and ethical solutions and for fully disclosing potential risks.

The full stack of an AI ecosystem requires deliberate planning around where each piece resides to maximize value and minimize risk. Here are some key areas that may be kept in-house or outsourced, depending on your overall approach to AI:

  • Business users and customers: Engaging stakeholders and end-users to understand requirements and optimize AI solutions for maximum business value.
  • AI model development: Establishing a team and supporting practices to develop or ground predictive or generative models.
  • Core infrastructure management: Ensuring robust infrastructure is in place to support AI operations efficiently.
  • Vector database and LLM management: Overseeing the embedding stores and large language models that underpin AI model training, retrieval, and inference.
  • Cyber security and risk: Implementing and maintaining stringent security measures to protect AI systems and data from potential threats.
  • Integration architecture: Establishing seamless connections and workflows between AI systems and existing data sources for effective data utilization.
  • Monitoring and support: Implementing and maintaining tools and processes to continuously monitor AI performance and provide timely support for any issues that may arise.

4. Define success metrics for AI implementation to measure progress and return on investment (ROI)

It is essential to define what success looks like before deploying AI so that the implementation is effective and sustainable. Clearly defining the objective ensures that the AI solution is tailored to the right issue with the most appropriate vendors.

Establishing measurable success criteria is crucial. For instance, if deploying a conversational AI bot for customer interaction, success could be measured by improvements in retention rate, CSAT score, or IVR containment rate. Having a proper performance management framework can help guide the creation of effective success criteria. These relevant metrics, specific to the AI projects, can help demonstrate the return on investment (ROI) and guide future AI decisions based on data.
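
For illustration, here is a minimal sketch of how a team might codify success criteria for a conversational AI pilot and check measured results against them. The metric names, targets, and measured values are hypothetical placeholders, not recommendations.

```python
# Minimal sketch: codifying success criteria for an AI pilot and checking
# measured results against them. Metric names and targets are illustrative.

success_criteria = {
    "csat_score":          {"target": 4.2, "higher_is_better": True},   # out of 5
    "retention_rate":      {"target": 0.85, "higher_is_better": True},  # fraction
    "ivr_containment":     {"target": 0.60, "higher_is_better": True},  # fraction
    "avg_handle_time_sec": {"target": 240,  "higher_is_better": False},
}

measured = {
    "csat_score": 4.4,
    "retention_rate": 0.82,
    "ivr_containment": 0.65,
    "avg_handle_time_sec": 255,
}

def evaluate(criteria: dict, results: dict) -> dict:
    """Return met/missed per metric so the team can decide to scale, adjust, or stop."""
    outcome = {}
    for name, spec in criteria.items():
        value = results.get(name)
        if value is None:
            outcome[name] = "not measured"
            continue
        met = value >= spec["target"] if spec["higher_is_better"] else value <= spec["target"]
        outcome[name] = "met" if met else "missed"
    return outcome

if __name__ == "__main__":
    for metric, status in evaluate(success_criteria, measured).items():
        print(f"{metric}: {status}")
```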

Organizations benefit from devising a contingency plan in case the defined success criteria aren’t met. Regularly measuring these criteria prevents overinvestment and allows for adjustments or termination if necessary. This ensures resources are used efficiently and mitigates potential losses.

5. Get your business case, financial infrastructure, and budgets set up based on your approach to AI implementation

Unlike traditional software or platforms, which carry larger upfront, amortizable capital expenditures and predictable operational spending, financial planning for AI looks more like planning for cloud, because key vendors are broadly pushing utility or pay-as-you-go models.

Each AI tool and platform differs in its commercial approach and pricing nuances. It is easier to estimate costs for use cases like extracting data from invoices, where volumes and metrics can be forecast, than for generative AI that supports end-user productivity, such as chat and knowledge management capabilities, where the number of queries sent to the AI varies widely.
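
To make that variance concrete, here is a rough back-of-the-envelope sketch comparing a volume-predictable use case (invoice extraction) with a usage-driven one (an internal chat assistant). All unit prices, token counts, and volumes are hypothetical placeholders, not vendor quotes.

```python
# Back-of-the-envelope monthly cost sketch for two AI use cases.
# All unit prices, token counts, and volumes are hypothetical placeholders.

def pay_as_you_go_cost(requests_per_month: float,
                       tokens_per_request: float,
                       price_per_1k_tokens: float) -> float:
    """Estimate monthly spend under a utility (per-token) pricing model."""
    return requests_per_month * (tokens_per_request / 1000.0) * price_per_1k_tokens

# Invoice extraction: volumes are forecastable, so the estimate is tight.
invoice_cost = pay_as_you_go_cost(
    requests_per_month=20_000, tokens_per_request=1_500, price_per_1k_tokens=0.01)

# Chat-based productivity assistant: query volume per employee varies widely,
# so model a low and a high scenario instead of a single point estimate.
low = pay_as_you_go_cost(requests_per_month=500 * 5 * 20,    # 500 staff, 5 queries/day
                         tokens_per_request=2_000, price_per_1k_tokens=0.01)
high = pay_as_you_go_cost(requests_per_month=500 * 30 * 20,  # 500 staff, 30 queries/day
                          tokens_per_request=2_000, price_per_1k_tokens=0.01)

print(f"Invoice extraction: ~${invoice_cost:,.0f}/month")
print(f"Chat assistant:     ~${low:,.0f} to ${high:,.0f}/month")
```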

Cost transparency is harder to achieve, particularly with utility-based models. To combat this, understand the cost constructs of AI and maintain a clear view of the benefits being driven; both are critical to tracking overall ROI.

6. Plan for future growth and ensure AI solutions can adapt to increasing data volumes and changing needs

As an organization's AI adoption grows, so will the demand for AI. Imagine an AI system for customer service overwhelmed by a surge in customer data as the company expands. To avoid such roadblocks, plan for scalability from the start. Design your AI with a modular architecture, a set of building blocks, so components can easily be added or upgraded as data volumes increase or business needs evolve. This keeps the AI system flexible, efficient, and effective as the landscape changes.
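
As a simple illustration of the building-blocks idea, the sketch below defines a small, swappable component interface so that a retriever or response step can be upgraded without rewriting the pipeline. The component names and behavior are hypothetical.

```python
# Minimal sketch of a modular AI pipeline: each stage implements a small
# interface, so components can be swapped or scaled independently.
from typing import Protocol

class Component(Protocol):
    def run(self, payload: dict) -> dict: ...

class KeywordRetriever:
    """Today's retriever; could be replaced by a vector-search retriever later."""
    def run(self, payload: dict) -> dict:
        payload["context"] = f"documents matching '{payload['query']}'"
        return payload

class SimpleResponder:
    """Placeholder response step; could be replaced by an LLM-backed component."""
    def run(self, payload: dict) -> dict:
        payload["answer"] = f"Based on {payload['context']}, here is a draft reply."
        return payload

class Pipeline:
    def __init__(self, *components: Component):
        self.components = components

    def run(self, payload: dict) -> dict:
        for component in self.components:
            payload = component.run(payload)
        return payload

if __name__ == "__main__":
    pipeline = Pipeline(KeywordRetriever(), SimpleResponder())
    print(pipeline.run({"query": "late shipment policy"})["answer"])
```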

Cloud-based AI solutions offer another layer of scalability. The cloud’s infrastructure can automatically adjust to growing data demands, ensuring an AI system can seamlessly adapt to an organization’s ever-changing needs. By prioritizing scalability, the AI system can remain a valuable asset, growing and adapting alongside the organization.

7. Prepare your workforce for AI integration, addressing concerns, and building trust

The fear of AI replacing workers can be a major roadblock for a leader looking to implement AI within their organization. A 2021 McKinsey study found 70% of workers worry about AI automation. To overcome this and ensure a smooth transition and employee buy-in, focus on building trust and collaboration through proper change management.

This involves empowering your workforce with training in skills that complement AI, such as critical thinking and problem-solving. Openly address job displacement concerns and emphasize AI as a collaborator, not a replacement. Identify use cases for AI to handle routine tasks, freeing employees for higher-level strategic thinking and innovation.

Finally, build trust through transparency. Educate employees about AI’s role within the organization and how decisions will be made, ensuring human oversight remains paramount. By addressing concerns and fostering collaboration, leaders can make AI a powerful tool to enhance productivity and drive innovation.

8. Create contingency plans for potential AI system disruptions and outages to ensure business continuity

A well-oiled machine is only useful if it keeps running. The same is true for AI systems. When implementing AI systems, it is crucial to consider how to mitigate business continuity risks when AI systems go offline. To prevent such disruptions, plan for contingencies.

Develop a comprehensive disaster recovery and business continuity plan that outlines procedures for AI system outages. This plan should include data backups, redundancy measures, and failover protocols to ensure minimal disruption to core operations. Regularly test and update this plan to guarantee its effectiveness in a real-world scenario. Additionally, explore alternative solutions for critical tasks. Can manual processes be implemented as a temporary backup in case of AI failure? By having a plan B in place, organizations can ensure business continuity even when the AI system experiences downtime.
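
One concrete pattern is a graceful fallback: attempt the AI path, and route work to a manual queue or rule-based process if the AI fails or times out. The sketch below is illustrative only; the AI call is a stand-in for whatever system you actually run.

```python
# Sketch of a fallback ("plan B") pattern for AI outages: attempt the AI path,
# and degrade gracefully to a manual queue if it fails. The AI call here is a
# placeholder for a real service.
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("continuity")

def call_ai_service(ticket: dict) -> str:
    """Placeholder for the real AI call; raises to simulate an outage."""
    raise TimeoutError("AI service unavailable")

def route_to_manual_queue(ticket: dict) -> str:
    """Fallback path: hand the ticket to a human agent with full context."""
    return f"Ticket {ticket['id']} queued for a human agent."

def handle_ticket(ticket: dict) -> str:
    try:
        return call_ai_service(ticket)
    except Exception as exc:  # timeout, rate limit, model error, etc.
        log.warning("AI path failed (%s); falling back to manual handling.", exc)
        return route_to_manual_queue(ticket)

if __name__ == "__main__":
    print(handle_ticket({"id": "T-1042", "subject": "refund request"}))
```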

Beyond the strategies outlined above, here are three key tips that should be top of mind for leaders who have already embarked on their AI implementation journey and want to drive sustainable success.

1. Ground AI with enterprise data to reduce AI hallucination

AI, particularly generative models, can fall prey to hallucination, generating seemingly realistic but factually incorrect outputs. A recent example involved a social media post authored by an AI that contained fabricated statistics about a political candidate.

To combat this, two common strategies are emerging. Retrieval-augmented generation (RAG) is a technique in which the AI model first searches for and retrieves relevant factual information from credible sources, such as enterprise data or knowledge articles, and uses it as a reality check. By incorporating these “data anchors” into the generation process, RAG grounds the AI’s outputs in contextual facts and avoids misleading claims. Combined with fact-checking integration, which verifies information in real time using external sources, RAG empowers enterprises to leverage the power of AI while maintaining the highest standards of accuracy.
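
As a simplified illustration of the retrieval step in RAG, the sketch below ranks passages from an in-memory knowledge base and assembles a grounded prompt. A production setup would use embeddings, a vector database, and a real language model call; here, keyword overlap and a placeholder prompt stand in for those pieces.

```python
# Minimal RAG sketch: retrieve relevant enterprise passages first, then build a
# grounded prompt. Keyword overlap stands in for embedding search, and the LLM
# call is left as a placeholder.

KNOWLEDGE_BASE = [
    "Refunds are processed within 5 business days of approval.",
    "Premium support is available Monday to Friday, 8am to 6pm ET.",
    "Data retention for chat transcripts is 24 months.",
]

def retrieve(query: str, documents: list[str], top_k: int = 2) -> list[str]:
    """Rank documents by shared query words (embedding search in practice)."""
    query_words = set(query.lower().split())
    scored = sorted(documents,
                    key=lambda doc: len(query_words & set(doc.lower().split())),
                    reverse=True)
    return scored[:top_k]

def build_grounded_prompt(query: str) -> str:
    """Anchor the model to retrieved context so answers stay grounded in facts."""
    context = "\n".join(f"- {passage}" for passage in retrieve(query, KNOWLEDGE_BASE))
    return (
        "Answer using ONLY the context below. If the answer is not in the context, "
        "say you do not know.\n"
        f"Context:\n{context}\n\nQuestion: {query}"
    )

if __name__ == "__main__":
    # The grounded prompt is what would be sent to the language model.
    print(build_grounded_prompt("How long do refunds take to process?"))
```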

By implementing these strategies, enterprises can significantly reduce AI hallucinations and ensure the outputs remain reliable and accurate. However, implementing these strategies requires specialization and expertise in setting up and refining large language models that some organizations may not have. This approach needs to align with broader enterprise principles around AI and can be executed in-house or through a third-party provider or product.

2. Limit unsanctioned AI tools and ensure robust security controls are in place

Enterprise adoption of AI can unlock immense potential, but leaders must prioritize mitigating privacy and security risks. For example, enterprise employees using unapproved gen AI tools like ChatGPT pose a significant privacy risk. These tools may lack robust security, leak data during use, and operate outside organizational oversight, which can lead to unwanted data breaches.

In addition, when fine-tuning or training AI models, it is important to assess the sensitivity of the training data. A model can inadvertently reveal some of that data in its outputs, leaking sensitive information and breaching company and customer confidentiality.
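
One lightweight control is to redact obvious sensitive patterns before any text is sent to an external gen AI tool or added to a training set. The sketch below is illustrative only; the regexes are simple stand-ins for a proper data loss prevention or PII-detection service.

```python
# Sketch of a simple pre-send redaction step: mask obvious sensitive patterns
# before text leaves the organization (e.g., to an external gen AI tool or a
# fine-tuning dataset). The regexes are illustrative, not a complete DLP control.
import re

REDACTION_RULES = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "CARD":  re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "SSN":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace matches of each rule with a labeled placeholder."""
    for label, pattern in REDACTION_RULES.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

if __name__ == "__main__":
    sample = "Customer jane.doe@example.com paid with card 4111 1111 1111 1111."
    print(redact(sample))
```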

Furthermore, AI systems without robust security controls can be vulnerable to manipulation by attackers seeking to distort outputs or inject bias. For instance, researchers have demonstrated how adversarial examples, malicious inputs designed to fool AI systems, could mislead self-driving cars and potentially cause traffic accidents. To safeguard against such attacks, implementing robust security measures is essential. This includes encryption techniques to protect data integrity, along with intrusion detection systems to identify and prevent malicious attempts to manipulate AI algorithms.

3. Unveil bias before it takes root

AI algorithms are only as good as the data they’re trained on. Unfortunately, inherent biases in data can lead to discriminatory outcomes. A widely reported example is Apple’s credit card algorithm, which was alleged to offer women lower credit limits than men with similar financial profiles.

To prevent such issues, proactive measures are key. First, ensure your training data accurately reflects the real world. Seek diverse datasets that represent the populations your AI will interact with, either through existing enterprise data or data procured through third parties or partners. This helps mitigate bias from the very beginning. Second, don’t wait for problems to surface. Use bias detection tools to identify potential biases within your algorithms. Techniques like data scrubbing to remove skewed information or using fairness metrics throughout development can help mitigate bias.
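
As a simple example of a fairness metric, the sketch below computes a disparate impact ratio, the approval rate of one group divided by that of a reference group. The data is made up for illustration; a mature process would rely on a dedicated fairness library and multiple metrics.

```python
# Minimal sketch of one bias check: the disparate impact ratio, i.e. the
# approval rate of one group divided by the approval rate of a reference group.
# The records below are made up purely for illustration.

decisions = [
    {"group": "A", "approved": True},
    {"group": "A", "approved": True},
    {"group": "A", "approved": False},
    {"group": "B", "approved": True},
    {"group": "B", "approved": False},
    {"group": "B", "approved": False},
]

def approval_rate(records: list[dict], group: str) -> float:
    in_group = [r for r in records if r["group"] == group]
    return sum(r["approved"] for r in in_group) / len(in_group)

def disparate_impact(records: list[dict], group: str, reference: str) -> float:
    """Ratio of approval rates; values well below 1.0 flag potential bias."""
    return approval_rate(records, group) / approval_rate(records, reference)

if __name__ == "__main__":
    ratio = disparate_impact(decisions, group="B", reference="A")
    print(f"Disparate impact ratio (B vs. A): {ratio:.2f}")
    # A common rule of thumb (the "four-fifths rule") flags ratios below 0.8.
    print("Flag for review" if ratio < 0.8 else "Within rule-of-thumb threshold")
```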

Building fairness into AI from the ground up is essential. Additionally, establishing clear ethical guidelines for AI development and deployment is crucial. This ensures fairness, transparency, and responsible use of AI, fostering trust with stakeholders and generating outcomes that benefit everyone. By taking these steps, you can ensure AI remains a powerful tool for good within your organization.

Artificial intelligence is a promising technology that can offer many benefits. However, it also poses significant risks that need to be carefully managed and mitigated. The strategies discussed in this article can empower you to harness AI’s power while minimizing those risks. We can help you design your AI strategy and guide you through responsible implementation. Partner with us to unlock AI’s potential and build a thriving future.

By: Najeeb Saour, Practice Leader, Charles Xie, Senior Engagement Manager & Raj Suri, Associate
