By Oliver Presland, Senior Vice President of Consulting Services, Ensono

Your business won’t achieve AI excellence without mastering these five essentials.
Just short of flashing neon lights, we’re continually reminded of AI breakthroughs promising to transform what we can do with our business data. AI’s potential to drive innovation, improve efficiency, solve complex problems and enable true competitive advantage is seemingly endless, but only if you’re ready to unleash it.
IT leaders are under growing pressure to tap into their data and leverage it to drive improved AI outcomes for their businesses. A recent IBM Institute for Business Value survey of more than 3,000 CEOs across over 20 countries shows that this pressure comes not just from the board of directors, but also from investors, creditors and even their own employees. Fully 75 percent of the CEOs surveyed feel their organizations have the knowledge and skills to incorporate AI, yet only 29 percent of their direct reports in the C-suite agree.1 Where is the disconnect? How can organizations ensure they are making the most of all the data at their disposal to gain meaningful value from AI applications?
What changes do they need to make across their technology infrastructure, culture and strategy?
As we learned back in the early days of public cloud adoption, focusing only on technology is a recipe for a failed transformation. To truly move the needle and build a culture of successful AI adoption, organizations must think more holistically. Let’s dig a little deeper and consider the five essentials for achieving AI success.
1 “CEO decision-making in the age of AI,” IBM Institute for Business Value, June 2023.
The creation of a well-articulated business case is not just a formality. It is a strategic imperative to gain stakeholder buy-in from the beginning of your data and AI journey, identify and mitigate potential pitfalls, and surface opportunities for competitive advantage.
As recommended in the Summer 2023 Maven Report article, “The AI Adoption Blueprint,” digging deeply into questions like “How can AI improve our products and services?”, “Where can AI streamline operations?” and “How can AI generate new revenue streams?” will help you to crystallize and communicate the value proposition of AI in clear, quantifiable terms, including KPIs and predicted improvements. This focuses resources on the areas that offer the highest value and gives stakeholders the insights they need to assess the ROI of AI initiatives and allocate budget, talent and infrastructure effectively.
Once you’ve established the “why” of AI implementation with a solid business case, the next step is defining the “how” through a proof of concept (PoC). The previously mentioned blueprint offers guidance on exploring the right questions to achieve a successful result. The areas of opportunity identified in your business case should be your starting point for pinpointing, in more specific detail, where AI can enhance efficiency, effectiveness or customer experience. Tasks that are time-consuming, prone to human error, or require sifting through large amounts of data are areas where generative AI can often add substantial value. However, graduating to true AI/ML applications over time and ingesting your own proprietary data can lead to even greater outcomes. Business value drivers such as better customer experiences and greater predictive capabilities can ultimately lead to new revenue opportunities and greater brand affinity.
Building your PoC does not need to be a heavy lift. In fact, with widely available low-code AI tooling, simple prototypes can be whipped up in days against real-world or simulated scenario data. This allows you to “fail fast” and iterate on the initial PoC based on feedback. The model can then be fine-tuned, adjusted and scaled based on insights gained from testing and validation, incorporating additional features or data sources to improve performance and robustness.
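To make this concrete, here is a minimal sketch of what such a prototype might look like in Python, trained against simulated scenario data with scikit-learn. The dataset, model choice and metric are illustrative stand-ins for your own use case, not a prescription:

```python
# A minimal PoC sketch: train and evaluate a quick baseline model on
# simulated data so stakeholders can see results in days, not months.
# All choices here are illustrative; swap in your own scenario data.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

# Simulated scenario data standing in for, e.g., churn or fraud records
X, y = make_classification(n_samples=5_000, n_features=20,
                           n_informative=8, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42)

model = GradientBoostingClassifier().fit(X_train, y_train)

# "Fail fast": an honest baseline report quickly tells you whether the
# use case merits further investment or a different approach.
print(classification_report(y_test, model.predict(X_test)))
```

A baseline like this is cheap to discard, which is precisely the point: each iteration either earns further investment or redirects it early.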
A PoC that succeeds in a controlled environment bolsters confidence among key stakeholders and creates excitement across the business. In addition, you’ll have the documented learnings, mitigated risks and best practices to prove the feasibility and scalability of your project.
Data practice inefficiencies can slow down all aspects of a business’s data journey, from ingestion, categorization and analysis through to insights and, eventually, its AI initiatives. These problems can stem from a variety of sources, including ill-fitting governance, technological inefficiencies and fragmented business unit ownership, leaving data increasingly siloed.
To fully leverage AI’s potential, businesses must be able to rely on high-quality, trusted data with democratized access. Core to this is establishing clear data ownership and accountability with the cultivation and advancement of a robust data community. Just as it takes a village to raise a child, ensuring top-notch data quality requires the collective effort of an entire organization, from IT leaders through to operations.
For instance, organizing regular meetups for data owners and stewards can help coordinate activities that impact the data platform and, consequently, AI models and dependencies. These meetings also offer a valuable forum for discussing upcoming projects with AI and data experts, enabling early identification of data opportunities and understanding the associated requirements and workload.
Within this community, it is imperative not only to define roles and responsibilities for maintaining data integrity and implementing effective processes, but also to provide comprehensive support, training resources and skill development opportunities.
In an ideal world, businesses would have access to real-time analytics to allow them to apply good quality data to AI PoCs, react to events, and make informed and impactful decisions, fast. However, many organizations still face issues with the accessibility of their data.
It’s commonplace to rely on an overnight export of data from legacy systems for reporting and analytics. By the time this data is accessible, it will be too late to react or pivot to the events of the day. The answer isn’t necessarily shifting all that rich data from legacy platforms to the cloud. While some applications may benefit from being modernized and migrated to the cloud, it may simply be a matter of making all the data accessible and usable for analytics by connecting it to the cloud.
World-class businesses need to architect their systems to deliver a reliable, singular view of data that is ready to feed their AI models. Realizing those data-driven ambitions means recognizing that investment will be needed to achieve the results they’re looking for. A structured, AI-ready data management architecture gives organizations the capacity and flexibility to collect, store, analyze and respond to the sum total of their data. It also lets them elevate their use of analytics: from descriptive “What happened?” reporting, to predictive “What might happen next?” insights, to prescriptive “What should we do now?” decisions.
If architected correctly, businesses can start querying their data in situ, rather than expending manual effort to source and assemble a suitable dataset every time executives ask questions of the data. This builds trust with leadership by providing timely insights.
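As one illustration of the in-situ pattern, an engine such as DuckDB can run SQL directly over files where they already live, with no overnight export or staging step. The bucket path, table layout and column names below are hypothetical, and credential setup is omitted:

```python
# A sketch of querying data in situ: DuckDB runs SQL directly over
# Parquet files in object storage, avoiding an overnight export step.
# The S3 path and columns are illustrative, not a real schema.
import duckdb

con = duckdb.connect()
con.execute("INSTALL httpfs; LOAD httpfs;")  # enables s3:// paths

revenue_by_region = con.execute("""
    SELECT region, SUM(amount) AS revenue
    FROM read_parquet('s3://analytics-landing/orders/*.parquet')
    GROUP BY region
    ORDER BY revenue DESC
""").df()
print(revenue_by_region)
```

The same pattern works against a data lake fed from legacy systems, which is often a far lighter lift than a wholesale migration.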
Where data skills are lacking, data and analytics capabilities are often siloed in one area of the business, disconnected from other departments. Beyond slowing decision making, organizational structures built around centralized data gatekeepers risk failing to provide operational, useful data for AI use cases. They also breed distrust: excluded employees don’t feel part of a broader strategy and are therefore less likely to engage meaningfully with data. The same is true among leaders, who often view data analysis as something to leave to certain experts on the team, but don’t fundamentally engage with or trust the results.
Businesses need new approaches to harnessing data, analytics and AI skills, with clear governance in place to build trust and maintain security. This should be invested in as an organization-wide endeavor: for example, an AI Center of Excellence (CoE) with a mandate from the very top and controlled but democratized access, whether through training or by rolling out new tools to lower the skills barrier to entry.
As also recommended in “The AI Adoption Blueprint,” your AI CoE should function as a permanent operational and governing body that guides all aspects of your AI program. Internal members can be both full-time technology, operations, data and security leaders with daily responsibilities for AI adoption, implementation and management, and part-time leaders from across the organization—Legal, HR, Finance, Board members and AI project owners within business units—who have a vested interest in your AI program and need both visibility and input into the process.
And, while cost concerns are always top of mind, balancing internal staff with external specialized partners can help you achieve the literacy and skill level your business needs.
As you integrate these disparate groups into a centralized function, adopting a common set of processes is essential. Shared approaches to project management, technical decisions, project owner onboarding, AI and data science training, risk/security decisions, organizational change management and training, financial governance, operational services and governance, and vendor management will help to ensure alignment and enable velocity.
Organizations are increasingly turning to AI to bolster their defense against sophisticated threats. One notable advancement in AI applications is its integration within cyber teams, where it serves as a powerful ally in detecting and mitigating complex attacks. By leveraging AI-driven algorithms to analyze behavioral anomalies, cyber teams can swiftly identify and respond to potential threats, enhancing overall security posture.
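As a minimal sketch of the kind of technique involved, an isolation forest, one common anomaly detection algorithm, can be trained on baseline session behavior and used to flag outliers for analyst review. The features and data here are invented for illustration, not a production design:

```python
# A sketch of behavioral anomaly detection, one technique behind
# AI-assisted threat detection. Feature names are hypothetical.
import numpy as np
from sklearn.ensemble import IsolationForest

# Per-session features, e.g. [logins_per_hour, MB_out, distinct_hosts]
rng = np.random.default_rng(0)
baseline = rng.normal(loc=[5, 20, 3], scale=[1, 5, 1], size=(1_000, 3))
suspect = np.array([[40, 900, 60]])  # bursty, exfiltration-like session

detector = IsolationForest(contamination=0.01, random_state=0)
detector.fit(baseline)

# score_samples: lower scores indicate more anomalous behavior
print(detector.score_samples(suspect))
print(detector.predict(suspect))  # -1 == flag for analyst review
```

In practice such a model is one signal among many; its value comes from surfacing the handful of sessions worth a human analyst’s attention.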
Moreover, organizations are implementing automated controls to mitigate the risk of accidental misuse of approved AI tools. These controls not only enhance operational efficiency but also serve as a proactive measure to ensure compliance with regulatory standards and internal data policies. By enforcing strict access controls and establishing comprehensive identity management protocols, organizations can automate data privacy measures, safeguarding sensitive information and mitigating compliance risks.
In line with regulatory requirements and internal governance frameworks, organizations are establishing clear acceptable-use policies for AI tools. These policies outline usage guidelines and restrict access to only those tools that have undergone rigorous review by internal compliance committees. A whitelist of approved tools, along with any exceptions, should be published to provide transparency and ensure alignment with organizational objectives. (See also, “Leveraging your brand as you dive into the technological unknown,” The Maven Report, Spring 2023.)
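Enforcement of such a whitelist can itself be automated. The sketch below shows the shape of a simple policy check that a proxy or gateway might apply; the tool hosts and exception register are hypothetical placeholders:

```python
# A sketch of enforcing an approved-tools whitelist. The hosts and
# per-team exceptions are illustrative; real enforcement would hook
# into your proxy, CASB or identity provider.
APPROVED_AI_TOOLS = {
    "copilot.internal.example.com",
    "chat.approved-vendor.example.com",
}
EXCEPTIONS = {  # published exceptions, per team
    "research": {"experimental-llm.example.net"},
}

def is_request_allowed(host: str, team: str) -> bool:
    """Allow only reviewed tools, plus published per-team exceptions."""
    return host in APPROVED_AI_TOOLS or host in EXCEPTIONS.get(team, set())

# Requests to unreviewed tools are blocked and logged for review
print(is_request_allowed("chat.approved-vendor.example.com", "finance"))  # True
print(is_request_allowed("random-ai-site.example.io", "finance"))         # False
```

Publishing both the list and the exception register keeps the control transparent rather than punitive.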
However, it is imperative to acknowledge that many will still face challenges in achieving comprehensive security and compliance measures. In instances where no centralized view of employee usage exists and acceptable use policies are undefined, organizations may be exposed to heightened risks of data breaches and regulatory non-compliance. As such, there is a pressing need for organizations to prioritize the establishment of robust security protocols and compliance frameworks to mitigate potential threats effectively as they adopt AI.
The integration of AI in security and compliance processes represents a significant milestone in organizational maturity. By leveraging AI-driven solutions, organizations can enhance threat detection capabilities, streamline compliance efforts and fortify defenses against emerging cyber threats. Moving forward, it is essential for organizations to continue investing in AI technologies and refining their security and compliance strategies to adapt to evolving threat landscapes and regulatory requirements.