
Will AI help or hurt data integrity? Both. Here’s why

You don’t need to be elbow-deep in the tech world to feel the effects of the technological renaissance that AI is shaping.

Almost overnight, AI has opened up new possibilities for innovation in every industry. It’s also keeping leaders up at night as they try to figure out how to use AI to stay ahead of the competition.

But there’s something stopping them, or at least frustrating them: data integrity. Zenhub’s most recent survey found that 70% of leaders said data quality was their biggest challenge in trusting AI with their business success.

To better understand the AI landscape as it relates to the data integrity problem, I’ve spoken with Zenhub co-founder Aaron Upright and Zenhub’s Lead AI Product Engineer, Juan Roesel.

In this blog, we look at AI for project management as an example of an area where data integrity is critical, explore why leaders are concerned about AI, and explain why AI is well-positioned to solve the data integrity problem that worries them.

AI’s project management use cases and why data integrity is important

Some functions we’ve seen AI perform in project management systems are: 

  • Advanced search (helping users find info in a PM system faster and summarizing that info more concisely)
  • Help center navigation (helping users get faster help learning the platform or troubleshooting) 
  • Summarizing information (breaking work down into action items and acceptance criteria, reformatting information, and so on; see the sketch after this list)
  • Providing recommendations (recommending actions for users to take, like delegating or unblocking tasks or inputting information) 
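To make the last two items a bit more concrete, here is a minimal sketch of how an issue could be summarized into action items with a language model. The `complete` function is a hypothetical stand-in for whatever LLM client you use, and the prompt format is an assumption for illustration, not Zenhub’s implementation.

```python
# Minimal sketch: turn an issue description into suggested action items.
# `complete` is a hypothetical placeholder for an LLM client call, not a real API.

def complete(prompt: str) -> str:
    """Placeholder for a call to a language model; returns raw text."""
    raise NotImplementedError("wire this up to your LLM provider")

def suggest_action_items(issue_title: str, issue_body: str) -> list[str]:
    prompt = (
        "Summarize the following issue into 3-5 concrete action items, "
        "one per line, each starting with '- '.\n\n"
        f"Title: {issue_title}\n\nDescription:\n{issue_body}"
    )
    raw = complete(prompt)
    # Keep only the lines that look like action items; drop everything else.
    return [line[2:].strip() for line in raw.splitlines() if line.startswith("- ")]
```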

Risks of using AI: how and when data integrity could be compromised

When we use AI, we expect particular outcomes: the ones we, as humans, would have produced manually. That isn’t always what we get. Poor AI outcomes typically come down to the quality of the data the AI uses to produce them.

“The areas that will pose the biggest risk are not in how AI is applied but in the AI models themselves,” says Aaron. “For example, if there’s a lot of bias in the data set a company relies on to train that model, or if the data is incomplete, what comes back may not be helpful.”

When systems contain biases and incorrect or missing information, AI could duplicate these errors. “The risk is that you’re cascading the noise. If we have a data set filled with inconsistencies, we will get noisy results that should not be used to make decisions,” says Juan.  

At this point, most people are aware that AI isn’t always going to be 100% accurate. The problem is that despite this, people may still blindly accept AI results. “Right now, humans can’t be asleep at the wheel,” says Aaron. “AI is perfect when it comes to suggesting outputs, but humans should always have the final say in terms of what output gets accepted and applied.” 

Given where AI is in 2024, control is key, not just for ensuring AI’s usefulness in the present moment but also for its usefulness down the road. “To mitigate this concern of having poor data propagating through the pipeline, having a human in the loop is key. Having a human in the loop will help steer the AI in the right direction,” says Juan. In other words, how humans interact with AI tools will play a key role in how those models learn and get better.
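A minimal way to picture the human-in-the-loop idea: the AI’s output is stored as a pending suggestion and is only written back to the record once a person approves it. The names and fields below are illustrative assumptions, not Zenhub’s actual data model.

```python
# Sketch of a human-in-the-loop gate: the AI proposes, a person decides.
# Suggestion and apply_if_approved are illustrative names, not a real API.
from dataclasses import dataclass

@dataclass
class Suggestion:
    issue_id: str
    field: str            # e.g. "labels" or "acceptance_criteria"
    proposed_value: str
    approved: bool = False

def apply_if_approved(suggestion: Suggestion, issues: dict[str, dict]) -> bool:
    """Write the AI's proposal to the issue only after a human approves it."""
    if not suggestion.approved:
        return False      # stays pending; nothing is written to the data set
    issues[suggestion.issue_id][suggestion.field] = suggestion.proposed_value
    return True
```

The useful property here is that unreviewed or rejected suggestions never touch the underlying data, so noisy model output can’t cascade through the pipeline.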

Another potential risk to data integrity is a lack of transparency: not knowing why a given output was suggested. “A lot of these AI suggestions don’t explain how they arrived at that suggestion, creating a lot of questions and challenges for the user,” says Aaron. This becomes an issue when humans can’t verify that an output is correct, and therefore can’t confirm the data is accurate or steer the AI in the right direction when it isn’t.
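One common pattern for this, sketched below under the assumption that your model can be prompted for structured output, is to ask for a short rationale alongside every suggestion and show both to the reviewer. It reuses the same hypothetical `complete` placeholder as the earlier sketch; the JSON response contract is an assumption, not guaranteed model behavior.

```python
# Sketch: pair each AI suggestion with a rationale the reviewer can check.
import json

def complete(prompt: str) -> str:
    """Placeholder LLM call, as in the earlier sketch."""
    raise NotImplementedError

def suggest_label_with_rationale(issue_text: str, allowed_labels: list[str]) -> dict:
    prompt = (
        "Pick the best label for this issue and explain why in one sentence. "
        f"Allowed labels: {', '.join(allowed_labels)}. "
        'Respond as JSON: {"label": "...", "rationale": "..."}.\n\n'
        + issue_text
    )
    reply = json.loads(complete(prompt))
    if reply.get("label") not in allowed_labels:
        raise ValueError("model suggested a label outside the allowed set")
    return reply  # e.g. {"label": "bug", "rationale": "The report describes a crash."}
```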

Why is AI well-positioned to handle data integrity?

Of course, while every new technology comes with new risks, it also comes with new opportunities. AI, in particular, has a chance to solve many of the challenges it exacerbates, chief among them the data integrity problem. Here’s why:

AI can be an expert in data categorization

Our team started implementing AI to maintain data quality in Zenhub because AI can be steered towards becoming an expert in categorizing the data that it’s trained on. “This means that it can help backfill missing values from our database, which, in Zenhub’s case, would be assigning labels to issues that don’t have them. This can also help extend and enrich the data around an issue,” Juan explains.

Filling in an Issue’s missing data makes its content more valuable to the team relying on it. It can also standardize data, regardless of where that data comes from. “Standardizing data is useful for categorization because if we’re trying to train a model that needs to categorize issues regardless of sources, then having normalized outputs will help facilitate that.”
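As a concrete sketch of what backfilling could look like, assume a hypothetical `classify(text, labels)` call wrapping a model trained or prompted on your existing labeled issues. Only issues with no label are touched, so human-assigned labels are never overwritten; this is an illustration, not Zenhub’s implementation.

```python
# Sketch: backfill missing labels on issues while leaving human-assigned labels alone.
# `classify` is a hypothetical model call, not a real Zenhub or library API.

def classify(text: str, labels: list[str]) -> str:
    """Placeholder for a trained classifier or a prompted LLM; returns one label."""
    raise NotImplementedError

def backfill_labels(issues: list[dict], labels: list[str]) -> list[dict]:
    for issue in issues:
        if not issue.get("labels"):  # only touch issues that are missing labels
            text = f"{issue['title']}\n{issue.get('body', '')}"
            issue["labels"] = [classify(text, labels)]
            issue["label_source"] = "ai_backfill"  # keep provenance so humans can audit
    return issues
```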

AI is creating behavioral change

Another way AI is solving the data integrity problem is one you might not even notice while using it: interacting with this technology fundamentally changes your behavior. “AI can create behavioral change in a positive way, i.e., helping people facilitate manual tasks that they might otherwise put off or ignore,” says Aaron.

Think about it: with Zenhub’s AI labels, the suggestion is right there and easy to accept, versus having to go through all the labels and choose the correct one. With Zenhub AI acceptance criteria (AC), users might not otherwise write in-depth AC, so having AI draft it makes it more likely that the data will be captured, and be thorough.

AI can now handle larger data sets than ever before

Finally, it’s all about timing. Earlier generations of AI simply didn’t have the computing power to take on a challenge as big as data integrity. Now, there’s no better time for this use case.

“It would have been a huge technical problem several years ago if we had said, hey, we’re going to feed 27 million Issues into this model that will categorize them. You just didn’t have the computing power to do that. A lot of the advances in AI right now are in solving problems that couldn’t be solved before,” says Aaron. 

We need patience and understanding to get the most out of AI

Like any other revolutionary technology, AI has benefits and drawbacks. Ironically, in the case of data integrity, one of its benefits is solving one of its drawbacks. 

Of course, when it comes to leveraging AI for data integrity, the importance of timing comes into play yet again: AI needs time to learn from your team. “When you give the team more time to work with the AI, the AI has more data points to identify patterns,” says Juan.

So, to get the full benefits of AI without the pitfalls, we need a little patience and an understanding of its strengths and weaknesses. With that, we can leverage the technology to outshine our competitors, stay productive, and make our lives easier.

Want to start using AI to improve the data quality of your project management system? Get a demo of Zenhub.
