The sudden, overwhelming proliferation and democratization of AI tools, and the mind-boggling pace at which they’re being developed and refined, have formed a juggernaut of technological advancement that individuals and businesses alike are struggling to fully understand, let alone keep up with.
In its 100-plus-year history at the forefront of innovation, IBM has walked the path into the technological unknown many times before. The Maven Report’s Sheila Lothian recently sat down with Chris Zobler, Vice President of Sales, Data & AI at IBM, to get an insider’s take on our current moment from the world’s OG AI company.
Sheila and Chris discussed the opportunities, risks and questions presented by AI-based innovation, as well as the steps organizations navigating these uncharted waters need to take to ensure they’re using these disruptive technologies appropriately, intelligently, and in ways that align with their business differentiators, strategic goals and company values. What follows is their conversation, condensed and lightly edited for clarity.
Sheila Lothian: Chris, how is the democratization of AI, through the emergence of generative AI tools, impacting your role at IBM? Is this moment fundamentally different somehow from what has come before?
Chris Zobler: Given the commercial availability of generative AI systems, the biggest change to date has been the lowering of the bar for who can actually participate in exploring the power of AI. Our collective focus on using AI to drive automation and optimization of human tasks, combined with the ability to make better predictions, has been the driver behind the perceived value of AI. However, only those who were comfortable leveraging techniques and languages like Python, Scala and R could participate in “dreaming big.”
Today, those barriers to entry have been drastically lowered, empowering non-technical advocates to look to generative AI to accelerate the delivery of business value without the overhead and complexity of writing code.
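To make that shift concrete, here is a hedged, generic before-and-after sketch; the file, column names, model and prompt are all hypothetical illustrations, not an IBM workflow:

```python
# Then: building even a simple churn predictor meant writing ML-literate code.
# (Illustrative sketch only; "churn.csv" and its columns are hypothetical.)
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

df = pd.read_csv("churn.csv")
X_train, X_test, y_train, y_test = train_test_split(
    df.drop(columns=["churned"]), df["churned"], test_size=0.2, random_state=42
)
model = RandomForestClassifier().fit(X_train, y_train)
print(f"Test accuracy: {model.score(X_test, y_test):.2f}")

# Now: a non-technical user can explore the same question with a plain prompt.
prompt = "Which of my customers are most likely to cancel next quarter, and why?"
```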
Ironically, the same challenges we have been working to solve for over a decade with machine learning models are now becoming the focus of generative AI platforms. Trust, transparency, data privacy, data quality and how best to use AI to build and maintain competitive advantage when everyone is using the same platforms, data and techniques have become a critical part of the narrative.
So, while the audience of participants in the conversation has expanded, the gating factors that have limited enterprise deployment of AI at scale are the same.
SL: How do you define appropriate usage of AI, and how is it different from ethical usage?
CZ: Intent is a critical decision point for me in distinguishing appropriate AI usage. At IBM, for example, augmentation of human intelligence—not replacement—is the primary focus of AI. The power of AI to tackle problems at a scale never before thought possible now allows us to make better decisions around business strategies, as well as how to accelerate the everyday human tasks that tend to be the most time-consuming across our enterprises. It also unlocks our ability to reach and connect with customers and employees on an entirely different level.
The use cases that support and augment how we make business decisions around supply chain optimization and hyper-personalized customer support are great examples of how organizations are leveraging AI for good. If that same technology gets applied to target or exclude certain demographics or consumers, then very quickly the intent for which we are using AI is drastically altered.
The ethical element has to do with HOW you are leveraging your existing data to make many of these kinds of critical decisions. How we identify both conscious and unconscious bias, for example, becomes a key element that needs to be addressed in the data sets we use to train any of our models. Aside from the process by which we build AI, its application to business and societal problems is where the distinction of HOW we use AI comes into the picture.
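As a deliberately simplified illustration of the kind of bias screening Zobler describes, the sketch below computes per-group positive-outcome rates in a training set, a basic demographic-parity check. The file, column names and threshold are assumptions for illustration; real bias audits go far deeper:

```python
# Minimal demographic-parity screen on a training data set.
# ("train.csv", "group" and "label" are hypothetical names for illustration.)
import pandas as pd

df = pd.read_csv("train.csv")
rates = df.groupby("group")["label"].mean()  # positive-outcome rate per group
print(rates)

# Flag a large gap between the most- and least-favored groups.
gap = rates.max() - rates.min()
if gap > 0.1:  # threshold is an illustrative choice, not a standard
    print(f"Warning: selection-rate gap of {gap:.2f} across groups; review data.")
```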
As organizations mature in their usage of AI for all use cases, checks and balances will clearly be needed. At IBM we have instituted an Ethics by Design framework as part of our core principles, with the goal of integrating technical ethics into everything we do in the AI space. The central mission of this framework is to enable AI as a “force for good” by embedding the principles of transparency, fairness, robustness, explainability and privacy as the foundational pillars of trust. Enabling AI workflows built with these principles is also one of the key components of our watsonx AI and data platform, which was announced at IBM’s Think conference in May.
SL: Everyone wants “in” on AI, and the available tools seem to be multiplying exponentially. Which business functions or units should be the highest priority for investing in or implementing AI-based solutions, and what does that evaluation process look like?
CZ: Employee efficiency and productivity, personalized customer experience and supporting sustainability efforts have quickly shot to the top of the list of where clients are investing. Not only do these areas have tremendous visibility and impact on an organization’s ability to differentiate its products and services with end consumers, they also tend to have the highest fragmentation of data, spread across multiple clouds and SaaS applications.
Evaluating where to start is usually a combination of two things. The first is: where do I think I can have the biggest impact on changing the experience, both in how my clients interact with my organization and in how my employees are supported in building and delivering exceptional, personalized experiences?
The second core factor is: where do I have a solid foundation of data to build from? While generative AI tools have accelerated an organization’s ability to build new models at scale and speed, access to trusted, governed data of high quality is often the barrier to entry for tuning and customizing new models with domain-specific data.
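One practical reading of that barrier to entry: gate any tuning run on basic data-quality checks. The sketch below is a generic example; the file, fields and checks are assumptions, not a watsonx API:

```python
# Simple data-quality gate to run before investing in a model tuning run.
# (File name, fields and checks are illustrative assumptions.)
import pandas as pd

df = pd.read_csv("domain_corpus.csv")

checks = {
    "no empty records": df["text"].str.strip().str.len().gt(0).all(),
    "no duplicates": not df["text"].duplicated().any(),
    "provenance recorded": df["source"].notna().all(),
}

for name, passed in checks.items():
    print(f"{name}: {'ok' if passed else 'FAILED'}")

if not all(checks.values()):
    raise SystemExit("Data-quality gate failed; fix the corpus before tuning.")
```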
Learn more about IBM’s new watsonx AI and data platform at ibm.com/watsonx
SL: What are the risks associated with even the appropriate uses of AI tools, and what controls, policies, guidelines and best practices should organizations implement to mitigate them?
CZ: The three Rs, as I like to call them, that cause the biggest concern around the usage of AI are: Reputation, Regulation and Revenue.
Every week, we see fresh examples of the lack of visibility and control over what AI systems produce on the covers of newspapers and magazines all over the world. The damage caused when trust has been breached is often irreversible, and the impacts can be seen almost immediately. Combined with rapidly evolving regulation, proving that you are building and deploying AI with trust will also have an immediate impact on the speed at which organizations feel comfortable deploying new models.
The requirements around documenting what models do, why, to whom they apply and where they live have already become part of the narrative, in the same way we saw data privacy regulation soar with the introduction of GDPR. In totality, how we manage and govern data, as well as how we leverage that trusted data within the organization, has quickly become a dinner table conversation, not one taking place in boardrooms only. (See also, “Leveraging your brand as you dive into the technological unknown.”)
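To ground the what, why, to whom and where point: many governance practices capture those answers as a structured record that travels with each model. Below is a minimal, hypothetical sketch; the field names and values are illustrative, not a regulatory standard or an IBM schema:

```python
# A minimal machine-readable model record answering what/why/to whom/where.
# (Field names and values are a hypothetical illustration, not IBM's format.)
import json

model_record = {
    "what": "Gradient-boosted classifier scoring loan applications",
    "why": "Reduce manual review time for low-risk applications",
    "to_whom": "Retail lending underwriters; applicants are the data subjects",
    "where": "Deployed in EU region; training data resides in eu-west-1",
    "trained_on": "applications_2021_2023_v4 (entry in a governed data catalog)",
}

print(json.dumps(model_record, indent=2))
```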
SL: Given the potentially catastrophic consequences of a misstep, should enterprises try to manage the adoption of these technologies in-house or use an external partner?
CZ: The skills conversation has always been a driver for when and where to source external resources to support the leveraging of new technologies. We will continue to see an evolution of business-user-driven tools to augment the developer-codified view of the world, so the kinds of skills organizations need will change.
In the short term, I anticipate the need to combine the skills and business acumen of internal teams and resources with external expertise in emerging technologies. Ensuring there is alignment around corporate values, a focus on social justice and equality, as well as a shared mission around building AI using the core pillars of trust, is the best way to mitigate any additional risk when expanding the sets of skills and personas you will need to bring these technologies into the enterprise. (See also, “Ensuring humanity and equity in an AI-driven workplace.”)
SL: Finally, Chris, how do you see the use of generative AI and other advanced AI tools evolving in the near term?
CZ: I expect to see a “slow down to go fast” approach really take hold in the coming weeks and months. While the whole world has seen the power of generative AI to use large language models to create beautifully written text-focused material, the risks and concerns around what may actually be produced, and the lack of visibility and transparency around how those models have been trained, are causing a lot of hesitation and pause.
As organizations build out their own AI governance practices, it will come down to balancing the promise of AI against the need for oversight, risk management and investment decisions. The only way I see this being possible is to build and deploy a framework with ethics at its core, with a focus on using technology as a force for good.