How Intuit democratizes AI development across teams through reusability
AI has become the core of everything we do at Intuit.
A few years ago, we set out to embed AI into our development platform with the goal of accelerating development velocity and increasing individual developer satisfaction. Building AI-powered product features is a complex and time-consuming process, so we needed to simplify it to enable dev teams to do so with speed, at scale. We found success in a blended approach to product development—a marriage of the skills and expertise of data, AI, analytics, and software engineering teams—to build a platform powered by componentized AI—what we at Intuit refer to as Reusable AI Services (RAISE) and Reusable AI Native Experiences (RAIN). These allow developers to deliver new features for customers quickly and build and integrate AI into products without the typical pain points or knowledge gaps.
Today, it’s not just our customers who benefit from our AI-driven technology platform; our developers do too. Whether it’s building smart product experiences or keeping design consistent across multiple products, our investment in a robust AI infrastructure has made it possible for technologists across the company to build AI capabilities into Intuit products at scale for our more than 100 million global consumer and small business customers.
In this article, we’ll share Intuit’s journey to democratizing AI across our organization, along with lessons learned along the way.
# Simplifying the path to integrating AI
In the beginning, when our developers wanted to add AI features to their projects, they couldn’t just plug in a library or call a service. They had to reach out to our data scientists to create or integrate a model. Most machine learning (ML) models are built on a bespoke basis because data is typically specific to a process or domain and doesn’t translate well outside of the identified scenario. While this is changing with multi-modal AI, in practice most systems are still trained on the specific corpus where they’re expected to perform (images, text, voice, etc.).
We realized that in order to make it easier for our developers to integrate AI just as they would with any other feature or component, we had to overcome three key challenges:
- Cross-domain communication
- Data quality standards
- Process improvements
# Cross-domain communication: Getting devs and data scientists on the same page (and tech stack)
Because product development teams work in different ways, aligning on an inclusive, common language when discussing how to integrate AI into the development process was key to fostering collaboration.
Software engineers and data scientists use different vocabulary in their day-to-day work. Data science terminology, for example, is very precise, especially around concepts like model performance, and can be difficult for non-experts to understand. Data teams might use terms like ROC (receiver operating characteristic) curves, macro-F1, or Hamming loss. Software engineers, for their part, are usually focused on durability, scalability, and the behavior of distributed systems. Such technically specific language can lose meaning in translation.
Simplifying such technical terminology—and having good documentation to explain what it means—made it much easier for developers and data scientists to communicate. Over time, developers will pick up new knowledge as their domain-specific comfort level improves. But we don’t want every developer and data scientist to have to learn a whole new set of jargon just to get started.
To address this, we adjusted the way we communicated based on context: using precise language when accuracy demanded it, and more approachable terms when the same message could be conveyed in plainer words. For example, when data scientists described data entities, engineers understood faster once these were translated into rows, columns, and fields, as well as into objects and variable values.
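To make that translation concrete, here’s a tiny sketch (the entity and its fields are hypothetical) of how a data scientist’s “customer entity” maps onto the rows, fields, and typed objects engineers already work with:

```typescript
// Hypothetical example: a data scientist's "customer entity" is, to an
// engineer, just a typed object -- one row from a table, with each
// feature as a field/variable.
interface CustomerRow {
  customerId: string;   // primary key column
  tenureMonths: number; // feature column
  monthlySpend: number; // feature column
  churned: boolean;     // label column
}

// A dataset is then simply an array of rows (a table).
const trainingData: CustomerRow[] = [
  { customerId: "c-001", tenureMonths: 14, monthlySpend: 42.5, churned: false },
  { customerId: "c-002", tenureMonths: 2,  monthlySpend: 9.0,  churned: true },
];
```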
We also found that mapping complex topics to business-specific terminology helped get everyone on the same page. For example, translating terms like classification, regression, and propensity scores into business use cases, such as pricing predictions or likelihood to resubscribe, made the concepts more accessible. Ultimately, we found that investing in finding a common ground and devising a more inclusive approach to communication resulted in better collaboration.
Equally pivotal to our success was bridging the worlds of software developers and data scientists by seamlessly integrating AI into existing processes. We had to find a way to support technology stacks our developers were accustomed to, so we mapped interfaces in the world of AI onto constructs they were familiar with. We built continuous integration/continuous delivery (CI/CD) pipelines, REST (Representational State Transfer) and GraphQL APIs, and data flows to build confidence in the platform’s integration across various domains.
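For illustration, here’s a minimal sketch of what consuming a model through a REST interface can look like from the engineer’s side; the endpoint, fields, and types are our own assumptions, not Intuit’s actual API:

```typescript
// Hypothetical sketch: invoking a hosted ML model through a plain REST
// interface, so developers consume predictions like any other service.
interface PredictionRequest {
  features: Record<string, number | string>;
}

interface PredictionResponse {
  score: number;       // e.g., likelihood to resubscribe
  modelVersion: string;
}

async function getPrediction(req: PredictionRequest): Promise<PredictionResponse> {
  const res = await fetch("https://ml-platform.example.com/v1/models/resubscribe/predict", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(req),
  });
  if (!res.ok) throw new Error(`Prediction failed: ${res.status}`);
  return (await res.json()) as PredictionResponse;
}
```

The point of the wrapper is familiarity: the developer never touches the model directly, only a typed request/response contract like any other microservice.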
With everyone speaking the same language and working in the same workflows, we turned our attention to the data we rely on to create AI-driven features.
# Data quality: Being good stewards of data means aligning on standards of quality
As a fintech company that deals with customers’ sensitive information, we hold ourselves to a higher bar for data access than may be the standard in other industries. We abide by a set of data stewardship principles, starting, of course, with the customer’s consent to use their data.
While technologists are eager to leverage AI/ML to deliver its benefits to customers, using it to solve the right problems in the right ways involves nuanced decision-making and expertise. Traditional API integration and state management in a distributed microservices world is already a challenging task for most engineering teams; AI-driven development adds another layer of complexity: identifying the optimal use cases, making sure the data is available, and capturing the right metrics and feedback.
But at the heart of AI/ML is data, and that data needs to be good to get good results. We aligned on a process of storing and structuring data, creating feedback loops, and systematically building data quality and data governance into our platform.
Having clean data was a non-negotiable—we couldn’t allow our core data to be polluted. At the same time, speed was crucial. These two factors can sometimes come into conflict. When they did, we decided to handle things on a case-by-case basis, as it quickly became clear that a blanket policy wouldn’t work.
Once an ML model has been trained and put into production, that isn’t the end of its need for data. ML models need a feedback loop of data signals from the user to improve their predictions. We recognized that this was new territory for some of our developers, and that they needed to account for more time for the models to gather results. Once developers got used to this, feedback loops became better integrated into the process.
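A feedback loop can be as lightweight as reporting whether the user accepted a prediction. Here’s a hedged sketch, with an invented endpoint and event shape, of what such a signal might look like:

```typescript
// Hypothetical sketch: reporting a user signal back to the platform so
// the model can learn from real outcomes.
interface FeedbackEvent {
  predictionId: string;                 // ties the signal to a specific prediction
  outcome: "accepted" | "rejected" | "edited";
  timestamp: string;                    // ISO-8601
}

async function sendFeedback(event: FeedbackEvent): Promise<void> {
  await fetch("https://ml-platform.example.com/v1/feedback", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(event),
  });
}

// Example: the user accepted an autocomplete suggestion.
void sendFeedback({
  predictionId: "pred-123",
  outcome: "accepted",
  timestamp: new Date().toISOString(),
});
```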
However, the developers creating those loops also needed access to the data. Most of our data scientists are used to writing big, complex SQL queries, but you can’t expect an engineering team that wants to leverage ML in its daily work to write highly complex SQL queries against a back-end Hive table just to train an algorithm; they may not have that experience. Instead, we set up GraphQL and REST API endpoints that gave developers a familiar interface.
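For example, a question a data scientist might answer with a hand-written Hive query can be exposed to engineers as a GraphQL endpoint. The schema and endpoint below are hypothetical:

```typescript
// Hypothetical sketch: fetching training signals through GraphQL instead
// of writing SQL against a back-end Hive table.
const query = `
  query RecentSignals($since: String!) {
    userSignals(since: $since) {
      userId
      inputText
      acceptedSuggestion
    }
  }
`;

async function fetchRecentSignals(since: string) {
  const res = await fetch("https://data-platform.example.com/graphql", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ query, variables: { since } }),
  });
  const { data } = await res.json();
  return data.userSignals;
}
```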
We had a shared language, and we had an understanding of how to use data in our features. Now we needed to tackle the hardest and most time-consuming portion of feature development: development processes and the people in them.
# Process deficiencies: This meeting could have been an API
In the past, when a developer wanted to build a new feature with AI, the process went something like this:
- Developer has an idea (e.g., an AI-powered autocomplete).
- Developer speaks to the product manager to see if it’s something customers would benefit from.
- Product manager speaks to a back-end data scientist to find out if the data is available.
- Product manager speaks to front-end and back-end engineers to see if the relevant text field can be modified.
- Back-end engineer speaks to the data scientist to find out how to connect the data.
- Developer builds the feature.
We set out to streamline the process, enabling dev teams to build AI-powered features in a fraction of the time, as follows:
- Introduced rigorous standards for software integration, including proper syntax and semantics for describing how different types of software interact with each other.
- Built self-serve software components and tooling to make it easy to consume and implement those standards.
- On an ongoing basis, we’re building discovery mechanisms so that these components can be easily found and consumed.
So how does this improved process work in practice? Using the same example of an AI-powered autocomplete, we would provide the developer with a UI component that automatically takes user inputs and feeds them into our data lake via a pre-built pipeline. The developer just adds the UI component to their front-end code base, and the AI immediately starts learning what the user has typed to begin generating predictions.
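In spirit, the developer-facing side of that component might look something like the sketch below. This is a minimal illustration, not Intuit’s actual component: the names (SmartAutocomplete, onIngest, getSuggestions) are invented, and a production version would also handle consent, batching, and error handling. The component renders a text field, forwards every keystroke to the pre-built ingestion pipeline, and surfaces the model’s predictions.

```tsx
import React, { useState } from "react";

// Hypothetical sketch of a drop-in AI autocomplete component.
interface SmartAutocompleteProps {
  fieldId: string;                                      // identifies the field in the data lake
  onIngest: (fieldId: string, text: string) => void;    // pre-built pipeline hook
  getSuggestions: (text: string) => Promise<string[]>;  // model endpoint
}

export function SmartAutocomplete({ fieldId, onIngest, getSuggestions }: SmartAutocompleteProps) {
  const [value, setValue] = useState("");
  const [suggestions, setSuggestions] = useState<string[]>([]);

  async function handleChange(e: React.ChangeEvent<HTMLInputElement>) {
    const text = e.target.value;
    setValue(text);
    onIngest(fieldId, text);                    // feed the data lake pipeline
    setSuggestions(await getSuggestions(text)); // ask the model for predictions
  }

  return (
    <div>
      <input value={value} onChange={handleChange} />
      <ul>{suggestions.map((s) => <li key={s}>{s}</li>)}</ul>
    </div>
  );
}
```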
Today, if an engineering team thinks a feature is valuable, data science leadership provides access to the data, algorithms, facilities to train the algorithm, and anything else they need from an AI or data perspective. No more waiting for months on a Jira request—developers can just go in, do the experiment, get the results, and find out quickly whether their feature will deliver value to the customer.
# After AI integration, solving for scale
Once we managed to successfully integrate AI into our development platform, the next question was: How do we scale this across our organization? It can take several months to develop a complex ML model from end to end. When we looked at our processes, we realized that we could make improvements and optimizations that would bring that down to weeks, days, and even hours. The faster we can build models, the more experimentation we can do and the more customer benefits we can deliver. But once we began to scale, we ran into a new set of challenges.
The first challenge was reusability. As mentioned previously, a lot of AI/ML features developed today aren’t reusable because models trained on data specific to a single use case don’t tend to generalize outside of that domain. This means developers spend a lot of time rebuilding pipelines, retraining models, and even writing implementations. This slows down the experimentation process and limits what an organization can achieve.
On top of that, development teams don’t necessarily know what has already been built, so they may end up rebuilding something that already exists. This revealed our second challenge: duplication. By the time we had dozens of teams building data pipelines, we realized a lot of duplication was going on, and solutions that worked well for one group couldn’t scale across an entire organization.
This is how we arrived at Reusable AI Services (RAISE) and Reusable AI Native Experiences (RAIN). Software developers reuse components all the time. There’s no need to reinvent the wheel if a method, class, or library already does part of what you’re trying to do. How could we bring that same reusability to our platform to solve for scale with AI?
Eventually, we realized the level of AI adoption and scalability we wanted was only feasible with a platform approach. We set out to identify solutions with potential for a broader set of applications, and invited teams to collaborate as a working group to develop scalable solutions. Getting the right people in the same room enabled sharing and reuse to drive innovation without duplication. We started building cross-cutting capabilities to be used across a range of different use cases for any team focused on building innovative new AI-driven products and features.
# A truly AI-driven platform: making it RAISE and RAIN
The objective was simple: create the foundational building blocks developers need to build AI into our products with speed and efficiency, while fostering cross-functional collaboration and simplifying approval processes. After addressing the roadblocks that were slowing us down (aligning how our teams spoke about their work, improving data quality, and streamlining processes), we were able to take our componentized AI services and turn them into RAISEs and RAINs that our developers could integrate into Intuit’s end products, building smart and delightful customer experiences.
Our new platform, with AI at its core, provides developers with a marketplace of data, algorithms, and models. We standardized the metadata that developers and data scientists contribute for every model, algorithm, and service, so that each is visible and understandable through our discovery service. We even use this metadata to describe the data itself through a data map, making it easy for developers to search the platform and see if what they need is already available. The platform also picks up updates and new releases and continuously prompts the development process to ensure AI-powered features provide the best possible customer experience. Today, AI-driven product features that used to take months can be implemented in a matter of days or hours.
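As an illustration of the idea, a standardized metadata record might look something like this; the schema below is our own sketch, not Intuit’s actual format:

```typescript
// Hypothetical sketch: a standardized metadata record that makes a model
// discoverable and understandable through a discovery service.
interface ModelMetadata {
  name: string;
  version: string;
  owner: string;                              // team responsible for the model
  task: "classification" | "regression" | "ranking";
  inputs: { field: string; type: string }[];  // maps onto the data map
  outputs: { field: string; type: string }[];
  tags: string[];                             // used by search/discovery
}

const autocompleteModel: ModelMetadata = {
  name: "text-autocomplete",
  version: "2.3.0",
  owner: "smart-experiences",
  task: "classification",
  inputs: [{ field: "inputText", type: "string" }],
  outputs: [{ field: "suggestions", type: "string[]" }],
  tags: ["autocomplete", "nlp", "reusable"],
};
```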
Our journey to democratized AI has not been a fast or simple one: it has required a complete change of mindset and approach, as it will for most organizations. Has it been worth it? Absolutely. Aside from the compelling customer and business benefits, our data scientists have become better software engineers and, in turn, our engineers have developed a richer understanding of the limitations and possibilities of AI and how it can make an impact.
Fundamentally, we believe that democratizing AI across an organization empowers development teams to build products that deliver outstanding customer experiences. Without our commitment to democratized AI, our development teams could not deliver smart product experiences at this pace. It removes barriers to collaboration and ultimately creates a virtuous cycle for developers and data scientists alike, driving innovation with speed at scale for our consumer and small business customers.