Zigzag is a dog training app that offers personalized, breed-specific training plans to guide dog owners from their puppy's early stages to adulthood. It also provides 24/7 access to live coaches who work with pet owners as they raise their puppies. The company's mission is to make the journey from puppy to adulthood as fun and stress-free as possible; achieving that mission, however, proved difficult. By leveraging UserTesting, an experience research solution, Zigzag was able to identify key challenges and test a new generative AI feature that helped it deliver on that mission.
Marija Dubrosvka-Hassan has been the Senior Product Designer at Zigzag for almost three years. When she joined, Zigzag had a subscription to the UserTesting Human Insight Platform that it used for some prototyping, but wasn't using it extensively. She began tapping into it to conduct regular user interviews, gaining a deeper understanding of customers and identifying areas for improvement.
One problem that customers continually cited was hesitancy to message coaches with questions they felt were simple or trivial. Instead, they would go to Google or YouTube and search for their answers. Dubrosvka-Hassan says this caused two problems for Zigzag.
At the same time, the company was experiencing rapid user growth, especially as it expanded into the US market (Zigzag is based in the UK but has a global customer base), and was facing capacity constraints with its human coaches.
The app had FAQs and extensive blog posts with detailed content, but through customer interviews it became clear that people wanted quick answers; they didn't want to spend hours searching and digging for them. The firm saw a gen AI chat feature as a way to leverage its training methodology and its coaches' expertise, recalls Dubrosvka-Hassan:
They [users] don't want to spend hours and hours searching through different kinds of things. They just want a particular answer to their question, and the ideal one is, yes, they do want the answer from the training coach. But they don't want to bother them because it's either too silly or they don't know how to ask. So, things like, 'Is it normal for my puppy to sleep this much?' But if you ask Google, you'll get so many different opinions. And with the generative AI, we know that we can actually put our expertise, our training, and our coaches' expertise into the AI and be that stopgap. If you don't want to speak to the coaches, you can still ask some questions, and then if this doesn't help you, you can go and talk to the coach.
The decision was made to create a generative AI chat feature called Ziggy. Ziggy would provide text-based answers that summarized what to do (e.g., try this, practice this), and could also link to more in-depth content within the app.
To help build the best AI chat, the firm conducted extensive testing, including prompt testing. The team knew that customers didn't always know how to ask a question or start a conversation, so it collated the top challenges customers faced when working with training coaches and used them to pre-fill questions at the beginning of the user's interaction with the AI chat.
Using Figma prototypes, they tested features such as suggested bubbles and conversation starters for common puppy challenges. Users would see the interface, and the team would observe how they interacted with the different prompt styles.
Testing found that contextual, problem-specific prompts (e.g., "My puppy won't stop biting"), where the user could press a button and get a generated answer, achieved a 100% engagement rate.
To alleviate any concerns about the content returned by the AI, human coaches extensively tested it to ensure its responses were accurate and in line with the business, and they continually review them, says Dubrosvka-Hassan:
In terms of the prompt engineering side, I know that the prompt has been written multiple times to make sure that what Ziggy does is a part of Zigzag. So Ziggy knows what we stand for, what training we are using, what it can say or what it cannot say, so it has very strict rules on that.
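Zigzag's actual prompt is not public, but the approach Dubrosvka-Hassan describes — a system prompt that encodes what the brand stands for, which training methods it uses, and strict rules about what the assistant may and may not say, combined with pre-filled conversation starters — can be sketched as follows. Every rule, prompt, and the `build_messages` helper here are illustrative assumptions, not Zigzag's implementation.

```python
# Illustrative sketch only: Zigzag's real system prompt and chat API are not public.

SYSTEM_PROMPT = """\
You are Ziggy, the in-app assistant for the Zigzag puppy-training app.
Strict rules:
- Answer only puppy-care and training questions, using reward-based methods.
- Keep answers short and practical, then point to in-app content if relevant.
- Never give veterinary diagnoses; recommend seeing a vet for health concerns.
- If a question is too specific or you are unsure, direct the user to a
  human training coach instead of guessing.
"""

# Pre-filled conversation starters drawn from common puppy challenges,
# shown as tappable buttons so users don't have to phrase a question.
SUGGESTED_PROMPTS = [
    "My puppy won't stop biting.",
    "Is it normal for my puppy to sleep this much?",
    "How do I start toilet training?",
]

def build_messages(user_question: str) -> list[dict]:
    """Assemble the message list for a chat-completion style API call."""
    return [
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": user_question},
    ]

msgs = build_messages(SUGGESTED_PROMPTS[0])
```

Keeping the rules in a single system message, rather than scattered across the conversation, is one common way to enforce the kind of "very strict rules" she mentions on every turn.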
This was the first time Zigzag built a gen AI app, and from customers' perspective, it was new territory. Dubrosvka-Hassan observes that customers didn't have a complete understanding of how Ziggy worked or what they could try, but in testing, 90% of users did express an interest in engaging with it.
One of the biggest surprises related to the value proposition messaging of the AI chat. Just under 60% of users showed little interest in reading content that explained what the AI chat was and how it worked. This lack of interest created both engagement and compliance risks. Testing showed that users needed to be slowed down at the entry point to the AI chat, which was achieved by adding extra screens to help them understand the value of the AI and engage with it effectively. Dubrosvka-Hassan notes that adding new screens to an app always carries risks, but this approach worked well.
Dubrosvka-Hassan is the only designer on the team, which also includes the head of product and operations. There isn't a dedicated researcher to conduct user research. Instead, the UserTesting insight platform provides access to a large, diverse audience for app testing and prototype validation. Dubrosvka-Hassan explains:
This is where UserTesting's contributor network is always essential to us, so we can quickly test Figma prototypes, conduct moderated interviews, and uncover insights. I would say time is always our biggest constraint. Within UserTesting, I can get access within minutes to the users, and this allows us to move quickly during the entire UX process, putting multiple solutions in front of users, getting feedback, validating, and deciding on our final approach before we give it to our engineers to build it.
The results from implementing the Ziggy gen AI chat exceeded expectations. There were 15,000 interactions with Ziggy in the first month, three times higher than projected, and a 27% decrease in human training coach support requests, which the company needed in order to scale sustainably. Implementing the AI chat also drove business improvements, including a 10% increase in the registration-to-pay conversion rate.
Dubrosvka-Hassan is most proud of the 96% positive customer feedback received, because it means the firm is genuinely solving problems for puppy parents through this feature:
I think one of the important things that we always say about Ziggy and if we introduce any other AI in the future is it still doesn't replace our real coaches. It extends their reach. So, speeding up the answers for the user, but with simple questions, while our human coaches can focus on more complex behavioral issues that require their expertise. So yeah, AI is not going to replace our training coaches. It is there to help them because Ziggy knows if it cannot answer any question, or the question becomes too specific, it directs them to the training coaches.
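The hand-off behavior she describes — Ziggy answering simple questions itself but directing users to a human coach when it cannot answer or the question becomes too specific — could be modeled as a simple confidence-based router. The threshold, function name, and reply text below are hypothetical, not Zigzag's actual logic.

```python
# Hypothetical sketch of AI-to-human escalation; not Zigzag's real routing code.

ESCALATION_THRESHOLD = 0.6  # assumed cut-off; in practice this would be tuned

def route(question: str, ai_answer: str, confidence: float) -> dict:
    """Return the AI's answer when confident, otherwise hand off to a coach."""
    if confidence >= ESCALATION_THRESHOLD:
        return {"handler": "ziggy", "reply": ai_answer}
    return {
        "handler": "coach",
        "reply": ("I'm not sure about this one — let me connect you "
                  "with one of our training coaches."),
    }

# A vague or complex behavioral question with low model confidence
# gets routed to a human coach rather than answered by the AI.
result = route("Why does my puppy guard food from my toddler?", "", 0.3)
```

The key design point the quote makes is that escalation is built into the assistant's behavior, so the AI extends the coaches' reach instead of replacing them.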
Testing is not always a top priority for companies. It's time-consuming and, depending on how you do it, can be costly. However, when you target the right audience, the insights you gather can save far more time and money, because you end up building something your users truly want.
Zigzag's approach to testing is refreshing. The company ran smart tests and gathered insights that helped it create a gen AI chat feature its customers would actually use. Too often, companies introduce new features without knowing how they will perform. Doing that with generative AI features risks presenting an experience that is not only poor but also gives wrong information, and no company can afford that.