AI challenges: What product leaders should prepare for in 2024

AI is everywhere these days, and it should come as no surprise that we see tons of early adopters in tech. Dev teams are working overtime to build artificial intelligence into their processes and their products, and the solutions SaaS companies offer are changing fast.

For product owners and managers, this sparks a number of all-new concerns and more than a few fresh challenges. We’d like to help you prepare for that. 

So, the Zenhub team connected with over 50 tech founders and leaders to learn about the hurdles they’re already facing and the challenges they expect.

Here’s what they told us.

We’ll be facing a lack of AI trust

Artificial intelligence has been met with a lot of resistance, and it doesn’t take an expert to see that. Users have a healthy sense of skepticism when it comes to new tech — and misinformation is an AI hot topic. A recent study by Forbes uncovered that more than 75% of consumers are concerned about AI-driven misinformation. 

Clearly, this could be a hurdle for software dev teams, and when it comes to shifting perspectives, FieldLogix founder Yukon Palmer says we may have a long road ahead of us.

“Users will have to learn to trust the outputs provided by AI systems over their own manual analysis,” he told us. “This will require behavioral changes by the users, and it may take time for them to transition away from manual decision processes.”

Win Big CEO Maurizio Petrone added that AI skepticism boils down to a “fear of the unknown” and a lack of understanding of how artificial intelligence works. For software dev teams, this could be a sign that it’s time to move user education up on our to-do lists.

AI challenges regarding privacy will continue

Another hurdle we’re likely to face is the longstanding worry users have when it comes to security and data — a concern PixelStorm director Daniel Florido says is justified.

“Data collection for AI programs has already resulted in several data breaches,” he told us, adding that product development teams should plan to have their defenses in order.

“Leaders in AI adoption need to shift from a reactive to a proactive strategy and recognize the growing threat posed by sophisticated adversaries,” he says. 

Jan Chapman, co-founder and managing director at MSP Blueshift, added that these concerns will be especially important for teams that build products in sensitive arenas such as healthcare or finance.

He advises that teams take a flexible approach and prepare to accommodate the strict privacy and governance requirements they’re likely to deal with.

We’ll need to get creative with storage solutions

Product owners and managers may also run into storage concerns as they integrate AI tech that relies on large data sets.

Property Inspection Pros owner Sol Kruk has already encountered this issue, and he shared his thoughts on how we might overcome it. Faced with the cost and space constraints that come with “conventional storage,” Kruk has set his sights on an alternative.

“Embracing flash storage has been promising. It’s reliable, cost-effective, and accommodates the demands of modern machine learning,” he told us.

Between storage and security, it’s clear that data management should be on our radars if we plan to get a leg up.

User adoption may come with limits

Changing user habits can be tricky, and many teams will face a steep learning curve as they build AI-powered features into their products. This will likely create limits on how we can realistically apply this tech — at least for now.

OceSha CEO Rohan Hall shared his thoughts on this topic. 

“Projects that expect their customers to learn AI prompting will fail,” he said. “This is like asking people to learn SQL to use your app. People who are initially curious will take the time to learn, but you won’t really have Uncle Bob doing it.”

There was a time when the ability to use Google effectively was a “special skill.” Today, it’s a minimum expectation. Maybe AI prompting will follow a similar pattern — at some point.

In the meantime, it will be up to our teams to find easy and intuitive applications for AI tech, and to create solutions that meet our users where they’re at.
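One way to do that, sketched below with hypothetical names and templates, is to keep prompting on the product side: the user clicks a familiar action like “Summarize,” and the app assembles the prompt behind the scenes.

```python
# Hypothetical sketch: hide prompt-writing behind a familiar UI action.
# The action names and templates here are illustrative, not from any
# specific product or API.

PROMPT_TEMPLATES = {
    "summarize_issue": (
        "Summarize the following issue in two sentences for a project "
        "status update:\n\n{issue_text}"
    ),
    "suggest_next_steps": (
        "List three concrete next steps for resolving this issue:\n\n"
        "{issue_text}"
    ),
}

def build_prompt(action: str, issue_text: str) -> str:
    """Turn a one-click UI action into a fully formed prompt, so the
    user never has to write (or even see) the prompt itself."""
    return PROMPT_TEMPLATES[action].format(issue_text=issue_text)

# Example: the user clicks "Summarize" on an issue card.
prompt = build_prompt("summarize_issue", "Login page crashes on Safari 17.")
# `prompt` would then be sent to whichever model the team uses.
```

The point isn’t the template itself; it’s that the user’s habit stays the same (click a button) while the AI work happens out of sight.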

AI may perpetuate problematic messaging

AI couldn’t function without our influence. Its ability to mimic human actions, decisions, and behaviors is what makes it so impressive. And while that may sound great (to some of us), there is definitely a catch.

Human influence isn’t always positive.

AnyIP co-founder Khaled Bentoumi was quick to call attention to this, pointing out AI’s tendency to “exaggerate human biases and give them an appearance of objectivity.”

To get a better handle on this, let’s look at a real-world example.

This summer, BuzzFeed made headlines when it prompted Midjourney to generate a Barbie to represent each of 194 countries. Insider summed it up nicely with a headline calling out “blatant racism and endless cultural inaccuracies.” Media chaos ensued.

Clearly, this isn’t the kind of bias we want finding its way into our products, and since AI lacks the ability to self-interrogate or make decisions based on ethics — for now — we need to be thinking about how we can prevent this.

Teams may struggle to build new habits

When it comes to building artificial intelligence into our workday, teams of all kinds are likely to face transitional challenges. AI can’t force new habits, and if we want high rates of adoption, we may need to get creative.

TechAhead CCO Shanal Aggarwal shared this concern with us — and offered his thoughts on finding a solution.

He says the key is “encouraging a technologically adaptable culture and highlighting the concrete advantages AI offers in terms of streamlining processes and enhancing human abilities.”

So again, user education. Even for teams that work in tech.

There is, however, a flipside to this: what if AI could actually help teams build better habits? That was the thinking behind Zenhub’s AI labels, a label-suggestion tool that nudges teams to apply the right labels consistently so their work stays organized.

“Labels might seem like a small place to start [with AI], but they’re at the heart of every report and analysis you and your team will do,” says George Champlin-Scharff, Zenhub’s VP of Product. “Without a clear understanding about how you’re investing in your team, you can’t make the right calls. AI is helping our teams gain clarity where they once had chaos.”
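For a sense of what label suggestion can look like in principle, here’s a minimal, hypothetical sketch; it’s not Zenhub’s implementation, just an illustration of scoring an issue’s text against keyword lists and surfacing suggestions for the user to confirm.

```python
# Hypothetical sketch of a label-suggestion helper -- not Zenhub's actual
# implementation. It scores an issue's title and body against keyword lists
# and suggests labels above a confidence threshold, leaving the final
# choice to the user.

# Hypothetical keyword map: label -> indicative terms.
LABEL_KEYWORDS = {
    "bug": ["error", "crash", "broken", "regression", "exception"],
    "feature": ["add", "support", "request", "implement"],
    "docs": ["readme", "documentation", "typo", "guide"],
}

def suggest_labels(title: str, body: str, threshold: float = 0.5) -> list[str]:
    """Return labels whose keywords appear often enough in the issue text."""
    text = f"{title} {body}".lower()
    suggestions = []
    for label, keywords in LABEL_KEYWORDS.items():
        hits = sum(1 for kw in keywords if kw in text)
        if hits / len(keywords) >= threshold:
            suggestions.append(label)
    return suggestions

if __name__ == "__main__":
    print(suggest_labels(
        "App crashes on login",
        "Seeing an unhandled exception and a broken redirect after signing in.",
    ))
    # -> ['bug']
```

A production feature would lean on a trained model rather than keyword matching, but the shape is the same: the tool proposes, the team confirms, and the labels stay consistent.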

It seems that the key to implementing AI safely and painlessly is to use it to improve UX, not demand more from the user. 

So maybe AI can give agile teams the helping hand they need to reinforce the habits they already want, rather than forcing them to adopt entirely new ones. Psst: we actually posted a blog last week on how agile AI might help foster agility.

We can (and should) plan for growing pains

It’s clear that AI can be a value-add to our products and our process. At Zenhub, we’ve already introduced some agile AI solutions to boost user experience, and our dev team has access to several AI-powered products.

That said, we should expect and plan for a learning curve as we navigate adoption and integration and learn how to make AI work for our teams and the people who use our products.
