XP Series Webinar

Man Vs Machine: Finding (replicable) bugs post-release

In this XP Webinar, you'll delve into the dynamic world of 'Man Vs Machine: Finding (replicable) bugs post-release.' Discover effective strategies and insights on navigating the evolving landscape of bug identification beyond the development phase.

Watch Now

Listen On


Jonathan Tobin

Founder & CEO, Userback

Meet Jon Tobin, an insightful CEO at the forefront of technology and human interaction synergy. Committed to refining and ensuring the success of software, Jon passionately advocates for the transformative power of well-crafted code. Guided by a customer-centric approach, he believes that customer voices are pivotal in shaping accurate and valued software. Beyond the business realm, Jon's diverse interests range from the founder's journey to the nuances of slow BBQ. Whether you're delving into customer-centricity, software excellence, or the art of barbecue, Jon Tobin offers a valuable voice with a wealth of insights.

Kavya Nair

Director of Product Marketing, LambdaTest

With over 8 years of marketing experience, Kavya Nair is the Director of Product Marketing at LambdaTest. In her role, she leads various aspects, including product marketing, DevRel marketing, partnerships, GTM activities, field marketing, and branding. Prior to LambdaTest, Kavya played a key role at Internshala, a startup in Edtech and HRtech, where she managed media, PR, social media, content, and marketing across different verticals. Passionate about startups, technology, education, and social impact, Kavya excels in creating and executing marketing strategies that foster growth, engagement, and awareness.

The full transcript

Kavya (LambdaTest) - Hi, everyone. Welcome to another exciting session of the LambdaTest Experience (XP) Series. Through the XP Series by LambdaTest, we dive into the world of insights and innovations, featuring renowned industry experts and business leaders in the testing and QA space. I'm Kavya, Director of Product Marketing at LambdaTest. And today, we have with us Jon Tobin, who is the Founder & CEO of Userback. I'm thrilled to welcome you to today's episode.

Before we delve into the software testing world, I need to make a proper introduction to our esteemed guest, who is a thought leader and visionary in the industry. Jonathan Tobin, CEO of Userback. Userback is a feedback platform for user-centered product development. And, of course, Jon is not your typical CEO. He's a forward-thinking individual who believes in the perfect blend of technology and human interaction.

But there's more to Jon than just the tech world. Beyond business, his interests span a diverse spectrum, from the founder's journey to the art of slow BBQ. Yes, you heard it right, slow BBQ. Whether you are seeking insights into customer-centricity, building software, or the finer details of BBQ, Jon is a valued voice.

Now let's take a quick sneak peek at what's on the agenda for today. We'll explore the intriguing dynamics of the age-old debate, Man Vs. Machine, in the context of finding replicable bugs post-release. It's a challenge that many in the software development world grapple with, and we are here to gain insights from Jon's wealth of experience.

Now, Jon, the first question for you would be, how do tighter release schedules and cost constraints affect internal teams in effectively identifying and resolving bugs before software release?

Jonathan (Userback) - Yeah, absolutely. Firstly, thank you for the introduction. I’m not 100% sure how much of a thought leader I really am. I definitely do have some thoughts, and hopefully, they do inspire. But I definitely love low and slow BBQ. So if anyone has any questions around that, I'd love to talk about it.

So in terms of tighter release schedules and cost constraints, internal teams face reduced quality, and there's more reliance on automated testing systems. And sometimes, though maybe not always, tighter release schedules and cost constraints can mean hiring less experienced engineers, and less experience means more bugs introduced into production. Then there's less time for developers to do testing.

So then we end up with the users actually running into issues when they're using the software and catching all of them. More bugs may be considered non-critical, and they get placed into the backlog. The issue with that is that when we're prioritizing, bugs that are actually critical can end up in the backlog too, and that can, I guess, change the way product development happens: we move on to the next project while bugs that really do need to be resolved are still sitting there.

So I think that you end up looking for tools to assist teams, and then you have more reliance on third-party technology as opposed to people genuinely spending time trying to find and resolve issues more thoroughly.

Kavya (LambdaTest) - Great, thank you so much for those insights. Moving on to the next question: how do you tackle key challenges, especially with traditional bug identification methods, when it comes to pre-release and post-release bug tracking?

Jonathan (Userback) - Yeah, so traditional bug identification methods are generally quite time-consuming, and people probably tend to take shortcuts there. And I guess most organizations want to do the right thing and spend time identifying issues and going through the right processes. Teams can sometimes create lists of bugs, which then get prioritized once the list is complete.

And then this leads to bugs potentially being lost based on their severity, and that can mean that bugs don't actually get fixed in time. And as I mentioned earlier around non-critical bugs being placed on the backlog, the backlog can fill up with bugs, and then they're mixed with ideas and user requirements. So we end up with a backlog that is very messy because it contains everything. What helps pre-release is a more focused effort on streamlining the bug detection or testing process for higher-priority bugs, and a higher level of cooperation with the engineering teams.

And in terms of the post-release process: having an internal SLA for bug resolution, clear communication with users post-release, and avoiding putting bugs into the backlog. The clear communication point I find interesting because it allows you to continue supporting customers and managing their expectations while you're still resolving the non-critical issues.

Kavya (LambdaTest) - Now, that's a very interesting point, especially the bit about sticking to internal SLAs. What does a typical internal SLA look like for organizations?

Jonathan (Userback) - Yeah, so I personally don't have a lot of experience in that area, but I have worked in organizations in the past that do have an internal SLA. For example, a blocker, which could be that the entire system is down, or customers are unable to make a payment in our product and therefore can't use it: that needs to be identified and resolved within a one- or two-hour time frame, or identified, replicated, and resolved within a certain time frame. That goes right down to the minor issues, which might have a seven-day internal SLA just for identification and triage, with levels in between depending on the impact to the end user.
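Jon's example tiers could be sketched as a simple severity-to-SLA mapping. The tier names, time frames, and function names below (`slaTiers`, `hoursUntilIdentifyBreach`) are illustrative assumptions based on his description (a one- to two-hour blocker window, a seven-day identification window for minor issues), not a standard or any organization's actual policy.

```javascript
// Hypothetical severity tiers with SLA targets in hours; adjust to your
// own organization's impact levels.
const slaTiers = {
  blocker:  { identifyWithinHrs: 1,   resolveWithinHrs: 2 },    // system down, payments failing
  critical: { identifyWithinHrs: 4,   resolveWithinHrs: 24 },
  major:    { identifyWithinHrs: 24,  resolveWithinHrs: 72 },
  minor:    { identifyWithinHrs: 168, resolveWithinHrs: null }, // 7 days, identification and triage only
};

// Hours remaining before the identification SLA for this severity is breached.
function hoursUntilIdentifyBreach(severity, hoursSinceReported) {
  const tier = slaTiers[severity];
  if (!tier) throw new Error(`Unknown severity: ${severity}`);
  return tier.identifyWithinHrs - hoursSinceReported;
}

console.log(hoursUntilIdentifyBreach("blocker", 0.5)); // 0.5 hours left
```

The point of encoding the tiers at all is the one Jon makes: the SLA is internal, so what matters is that every team reads the same table, not the specific numbers in it.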

Kavya (LambdaTest) - Great, thank you. And you also spoke about communication being a critical part of the process. I think it is especially true when it comes to a large organization that is an enterprise kind of an organization where there are multiple different products and multiple different teams working on various challenges.

Just wanted to pick your brains again on the key ways in which people communicate, especially once they've identified the difference between bugs and ideas. How does the process typically go from there?

Jonathan (Userback) - Yeah, so this is something I'm actually really passionate about, because my professional background revolves around customer success, support teams, and sales. And I guess from that perspective, the frontline team are usually the ones communicating with users about issues, issues that may not necessarily have been reported by that user but that the team is aware development might actually be resolving.

So if a user has reported an issue, keep clear communication with them: let them know that the issue has moved into a development stage and is now being worked on, or that it's been resolved but will be placed into the next release, which might be in a week or a month depending on release schedules. And make sure that the customer-facing team is communicating back with whoever's looking after the engineering team.

So the customer is experiencing this now: as a result of this issue happening, the customer is not happy because they're unable to do the task they need to do in the software. It's about making sure the communication channel from the end user who's being impacted by the issue reaches the team responsible for resolving it.

Kavya (LambdaTest) - Yeah, great point and great insights. Thanks again. Moving on to the next question: what are the key challenges that arise when feedback is scattered across multiple platforms, which typically happens quite often? What are your thoughts on that?

Jonathan (Userback) - Yeah, you're right. It's a really common issue. Most organizations of any size have this problem, whether you're a five-person organization or have several thousand employees. The challenge that organizations have is a lack of visibility across the different teams, and different teams, without maybe realizing it, collect feedback from their users in different ways.

So, for example, internally, a development team collects feedback from QA testers because they're going through and testing the product. Support teams collect feedback through the different surveys they might be running, like NPS or customer satisfaction surveys. Product managers run their surveys; marketing, you run your surveys.

Everyone's collecting feedback from different mediums in different ways. And when someone in the organization needs to make a decision, let's say in product development we're looking to introduce a new feature, or the product strategy is changing and we need to make changes to the product, the question becomes: what are our customers saying about this?

And there's generally no centralized place for that team to go to find out what the user sentiment is, what users have been saying in relation to that feature, or how they use the product. What generally happens is the person leading that project, usually a product manager, will go and talk to each department and then have to collect and consolidate everything.

The other challenge this introduces: as a marketing team collecting feedback from users through surveys, generally speaking, an issue will be raised by a customer who says, oh, I was using the product the other day and this problem happened for me, and for that reason I'm not very happy. That can often get lost in the mix of thousands of responses. The issue may actually be affecting other customers, but it's only known or surfaced by one team in the organization, and maybe not the team that is ultimately responsible for helping those users or fixing that issue. And because the information is siloed, it's not readily available to other areas of the business.

Kavya (LambdaTest) - Yeah, and I think another challenge that teams often face in these situations is prioritization: which bugs to prioritize, what the key challenges are that users are facing, and accordingly planning the releases, I suppose.

Jonathan (Userback) - Yeah, you're right. Because usually we listen to the loudest person in the room, the person that's, I guess, screaming. So when there's a high-value customer screaming about an issue, we'll tend to prioritize that user's issue or the bug that's affecting them. Whereas in reality, that bug might only be affecting them, and the rest of our customer base is okay with that bug being there. So we'll spend resources and time prioritizing a fix on that issue and direct resources away from other issues that are affecting a majority of our customer base.

Kavya (LambdaTest) - Yeah, absolutely. Moving on to the next question, Jon, how does utilizing user feedback for bug identification affect the speed of the development life cycle from issue detection to fix implementation? I think this is very closely related to what you were just previously speaking about. Yeah.

Jonathan (Userback) - Yeah. Yep. So in a previous organization, and I'm sure that listeners can relate to this, we had a piece of software that allowed each user that you would invite into the tool to set permissions for each user. And you could turn on and turn off the different features that your users would have access to, which meant that there was a wide range of different scenarios and different access levels that users could have in that software.

And that makes things really complex. It actually, I guess, creates challenges for automated testing systems, because there are lots of different variables in the software. It's super complex, and it can't always be picked up by those automated testing systems, or by the QA testers going through, because when QA testers test something, they need to set the scenario up.

I'm a user, but I have these five permissions and I don't have access to these other permissions. Now I test again: I have access to these four permissions and not those. And they run those tests depending on what access levels a user might have. So in this way, the people actually using the product, your users, are really great assets when it comes to testing.

And it's likely that they experience issues regularly, but they actually have no way of notifying you outside of contacting support. And given that I've had a background setting up, looking after, and managing support teams, and then talking to customers about their experience with support, I can tell you: customers don't like to contact support. They don't want to have to pick up a phone, send an email, go into a live chat, or read a document, because it breaks their workflow. They didn't sign into the product today to contact support. They came in to do a task that your product allows them to do.

So giving users a really easy way to report issues as they encounter them, with rich information in the bug reports, helps speed up implementing fixes on the development side. You increase customer satisfaction because the user was very quickly able to say, hey, there was an issue here, and then continue on their way doing the thing they signed in to do. And you reduce the back-and-forth between the support team, the developer, and the customer.

I guess one of the things I remember hearing a lot in phone support was the support person saying, oh, can you let me know how you did that? What were the steps you took in your account, so I can try to replicate that issue? And then, when that's sent to the development team with a list of all the steps the customer took, the developer comes back on that task and says, oh, well, can you please let me know what browser they're using?

And then the support person has to break the cycle of replicating and implementing the fix to go back to the customer: hey, what browser is it that you're using? And then back to the developer. I don't know about you, but if you stop working on a task for, say, a day or two and then have to go back into it, it takes quite a bit of time to remember what it was.

Let me go back through it. That might be half an hour you've taken just to reset your mind to start working on it again. And I'm sure developers experience this all the time, where it takes them time to get their mind back into: I'm fixing this bug, I've got the information, okay, I remember what it was, I can continue.

Kavya (LambdaTest) - Yeah, so essentially the loop between the user, the customer success team, the QA team, and the development team keeps on growing, and that essentially creates a lot more hassle at the end of the day.

Jonathan (Userback) - Yeah, absolutely.

Kavya (LambdaTest) - Great. Moving on to the next question: can you provide examples of how user feedback integration affects the cost and resources allocated within a development team for bug identification?

Jonathan (Userback) - Yeah, so, doing our research, we identified that 38% of developers actually spend over 25% of their time fixing bugs, and 26% spend over 50% of their time on them. So the problem isn't actually fixing the bug, it's gathering all of the required information to replicate and resolve the issue. And I'm certain that developers don't genuinely like resolving bugs or replicating issues.

In my tenure, I've only ever spoken to one developer who, when I asked what he loved doing, told me: I actually love finding bugs and fixing them. So I guess with those stats, it means that either the QA tester or an internal team member reports an issue to support, and then that gets logged as a bug or a task in a project management tool such as Jira, and the developer receives it. They start working on the issue, and, like we were talking about before with that loop, the developer can't replicate the issue and asks for more information.

There's the back and forth, and that's extended when it's a customer, as we spoke about: then there's the back and forth between developer and customer, with support in the middle. And it just takes longer to fix the issue once all of that information has been gathered. So using a tool, it actually doesn't matter who's logging the bug to the development team: the bug report is going to be consistent.

And the user, QA tester, or internal team member can annotate the screen. The report can contain any errors that were happening in the console at the time, and all of the session information, like the browser they're using, even the DPI of their screen (because in some visual tools that contain a WYSIWYG, like a CMS or an email designer, zooming in on the page can potentially cause issues with the way it works), any custom metadata, or even a session replay of exactly what the user was doing leading up to the point where they submitted the issue.
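The kind of context-rich bug report Jon describes could be sketched as a plain payload builder. Every field name here is a hypothetical illustration, not Userback's actual schema; in a real browser widget the `env` values would come from `navigator`, `window`, and a console-error listener rather than being passed in.

```javascript
// Sketch of a structured bug report: the user describes the problem once,
// and the tool attaches the session context the developer needs to replicate it.
function buildBugReport(description, env, consoleErrors, metadata = {}) {
  return {
    description,                   // what the user typed or annotated
    session: {
      browser: env.userAgent,      // e.g. navigator.userAgent in a browser
      viewport: env.viewport,      // e.g. { width, height }
      devicePixelRatio: env.dpr,   // screen DPI/zoom; matters for WYSIWYG editors
      url: env.url,                // page the user was on
    },
    consoleErrors,                 // errors captured at submission time
    metadata,                      // custom app context (plan, role, feature flags)
    reportedAt: new Date().toISOString(),
  };
}

const report = buildBugReport(
  "Save button does nothing",
  { userAgent: "Mozilla/5.0 ...", viewport: { width: 1440, height: 900 }, dpr: 2, url: "/editor" },
  ["TypeError: cannot read properties of undefined"],
  { plan: "pro" }
);
console.log(report.session.devicePixelRatio); // 2
```

Because the payload is built automatically, the "what browser are they using?" round trip Jon describes never happens: the answer ships with the first report.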

So basically, you're really just trying to speed up the replication process as much as possible. And the biggest resource cost is the uncaptured information. This is actually a stat from the State of Software Code report by Rollbar: 22% of developers feel overwhelmed by the manual processes surrounding bugs. And what's more worrying, 31% say that manually responding to bugs makes them feel frustrated. So it seems like a simple fix with a huge potential impact.

Kavya (LambdaTest) - Yeah, absolutely. Great. Moving on to the next question, what cultural shifts or mindset changes are necessary for development teams to fully adopt and use non-technical user feedback for bug identification and resolution?

Jonathan (Userback) - I don't want to trivialize this question at all, but personally I actually think it's a fairly easy mindset change, and that's because the technology already exists to support this process. So effectively, you're able to quite easily transform non-technical users into an army of QA testers, and the key to that is providing consistency and maybe a level of triage.

So the reality is that when anything is identified or logged by a non-technical user, someone is still there in the middle, able to review each issue, report, or request and make any adjustments or collect additional information before passing it through to the development team. This allows you to have a gatekeeper there on issues that might get logged as a bug. And in software development, users will always log something as a bug; they'll say, this is broken, it doesn't work the way it should. The reality is they're not all bugs: they could actually be feature requests. So having that gatekeeper in place lets you triage appropriately.
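The gatekeeper step Jon describes could be sketched as a simple routing function. The labels, keywords, and routing targets below are made-up examples; in practice this triage is usually a human reviewing each report, perhaps assisted by rules like these.

```javascript
// Hypothetical triage gatekeeper: reclassify and route user-submitted
// reports before they reach the development backlog.
function triageReport(report) {
  const text = report.description.toLowerCase();
  // Users often label feature requests as "bugs"; reclassify those first.
  if (/(would be nice|can you add|wish|feature)/.test(text)) {
    return { ...report, type: "feature-request", routeTo: "product" };
  }
  // Reports with captured console errors are likely genuine bugs.
  if (Array.isArray(report.consoleErrors) && report.consoleErrors.length > 0) {
    return { ...report, type: "bug", routeTo: "engineering" };
  }
  // Otherwise the gatekeeper asks for more detail before escalating.
  return { ...report, type: "needs-info", routeTo: "support" };
}

const fr = triageReport({
  description: "Would be nice if you could add dark mode",
  consoleErrors: [],
});
console.log(fr.type, fr.routeTo); // feature-request product
```

The payoff is the one Jon names: engineering only ever sees items already confirmed as bugs, and feature requests land with the product team instead of cluttering the bug backlog.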

And lastly, reassure the development team that it doesn't mean they're going to be directly communicating with users. I think that's quite scary: when an engineer receives an issue that's been reported directly by a user, they think, do I have to communicate back with that person? What if I say the wrong thing? It doesn't mean that they have to start doing that.

Kavya (LambdaTest) - Oh, great. Really, really interesting. Moving on to the next question, how can development teams effectively blend non-technical user insights with internal technical expertise in resolving identified bugs or issues?

Jonathan (Userback) - If all things are equal in terms of the data being provided with the bug reports, so internal reporters and non-technical users are reporting issues and the context being provided is the same, then I guess the internal team is always going to provide maybe a little bit more information. So allow the internal teams to add more context to the user insights before they get escalated to development, by having that triage step.

So you're able to better understand the user. As the support person who's usually triaging, you have a better understanding of the user, so you can add some more context around what the user is saying. And lastly on that, I think the perspective of the user is important, because the user's perspective is not always the same as the developer's, the product manager's, or the internal team members'. Sometimes we tend to think about our products in very specific ways.

And we think that we're developing our product in this way because this is how we want our users to use the product. But the result is the users use the product in the way that they want to, or they think they need to. So having the user insight there available, it can help us make better decisions when resolving issues because they may actually relate to other areas of the product.

Kavya (LambdaTest) - Well, I think that's a very interesting point, given how essential listening to users becomes, especially when releasing new features and, in fact, even when backtracking on features. I mean, if there is something that's not working, users would be the first to point it out, and it makes sense that teams should listen to them. Great.

Jonathan (Userback) - Yeah, that's right.

Kavya (LambdaTest) - The other bit that stood out for me was how important it is for all these internal teams to sync together. Of course, we had highlighted the communication part, but the user is in touch with these internal teams at various points. They're interacting with the pre-sales team, interacting with the customer support team, and a lot of the time they're part of surveys that the marketing teams run. So yeah, it is interesting how even non-technical teams play a role when it comes to bug identification.

Jonathan (Userback) - One of the things that I've seen work well in some organizations is where someone from the product team or the development team joins the daily standup of the customer-facing or support team, or they do a weekly meeting that the heads of the different departments attend. And it's really used to share sentiment.

It's used to share outstanding issues that are affecting users. Not all issues affect all users. But the support team or the customer-facing team are the ones with their ears closest to the user. Users will generally say, hey, I'm still experiencing this issue, when is a fix going to be available? That meeting really allows the other departments to get a better understanding of how users are feeling and what issues might still be affecting them, so that issues the development team might have seen as non-critical or not needing priority can maybe be reprioritized.

Kavya (LambdaTest) - Yeah, absolutely. Thank you so much, Jon. Moving on to the next question, can you please share any insights on how this approach not only aids in bug identification but also contributes to an agile development process or a DevOps culture?

Jonathan (Userback) - I think by automating the collection and delivery of the issue information to the internal teams with each issue or feedback submission, non-technical users can give the internal, I guess, more technical team members everything they need to identify, recreate, and resolve issues. With reporting and feedback tools, it really reduces that traditional investigation time by, you know, up to 70% from what we've seen here, at least at Userback. And it means that teams can maintain that iteration and release philosophy, and that's really core to the DevOps philosophy.

So when a business tries to turn a non-technical user into a technical, I guess, feedback or issue submitter for insights and feedback, it slows the DevOps process, and for any business it probably should be avoided. Instead, let the technology make it simpler and more frictionless for those non-technical users to provide feedback, and let the technical devs do what they do best: they're there to code, not to cross-examine users.

It isn't the user's job to deep dive into your product flaws. And if it's too hard, they won't provide any feedback at all. That's detrimental to product insights and future builds, I think.

Kavya (LambdaTest) - Great, thank you so much. This is very interesting to hear. Great, moving on to the last question that we have today. How do you foresee the future of using user feedback for bug identification and resolution, considering advancing technology and evolving user engagement behaviors?

Jonathan (Userback) - Yeah, so this is maybe a little bit controversial, because we've been speaking about turning non-technical users into, you know, people helping you identify, fix, and resolve issues and getting more feedback from your users. But we have this philosophy here at Userback, which is that it's not the user's job to report issues to you. You didn't sign on a user to be a QA tester; they're there to do a job, and you should really help them do the job they signed up to do. They're not there to be saying, oh hey, I've got this issue with this button, or this thing's not working. Sure, you'll have the power users who really love your product; they're the product evangelists who will definitely do that. So I think that moving forward

There's so much data available, and there are so many tools available that tell us lots of things. We've got product analytics tools, we've got cross-browser testing tools like LambdaTest, and we've got AI available. We have so much data at our fingertips, and I think we need to start looking for opportunities to get on top of issues before they actually become issues. And part of that, in terms of bug identification, is building deeper relationships with our users, so that it doesn't feel to them like they're reporting an issue.

And maybe it's prompting them to provide feedback along their journey using the product, because our users probably run into issues more frequently than our internal teams do; they're the ones using it to do the thing we built our software for. And prompting them along the way: if we identify something in the data that may actually be causing an issue for our users, being able to easily ask them, hey, how's everything going? Is everything okay?

And because we know who that customer is, we can find customers in our user database that look similar and prompt them for feedback along the journey. I think we can, one, gain better user insight into how our customers are using the product and what they like and don't like. Often users will do one thing and say another. And, yeah, I guess it just provides a better customer experience overall.

Kavya (LambdaTest) - Great, amazing. Thank you so much, Jon. It has been a really great session. As we wrap up our talk on Man Vs. Machine: finding replicable bugs post-release, big thanks to Jon for his invaluable insights. Remember when we put human smarts together with tech wizardry, cool things happen in coding. So every bug you catch is like a hidden treasure. I think it definitely makes software get better. I hope I'm correct, Jon. Great.

So yeah, to our users: keep combining your own creativity with the power of technology. Bugs might seem like trouble, but I'm pretty sure they're stepping stones to better software. And I'm very sure the audience found your insights very useful. Thanks once again, Jon. Loved having you here. Stay tuned for more exciting and insightful XP Series webinars.

Until next time, keep coding and happy testing. Thank you, everyone, and thank you, Jon.

Jonathan (Userback) - Thanks for having me, Kavya!

Past Talks

Client Feedback & Quality Assurance in Web Design for Agencies

In this webinar, you'll learn about three pillars - web design, client feedback, and quality assurance crucial for crafting exceptional customer experiences in today's dynamic digital landscape.

Watch Now ...
Democratize Automation to Build Autonomy and Go-To-Market Faster

In this webinar, you'll explore how democratizing automation is redefining organizational dynamics, cultivating autonomy, and helping teams go to market faster than ever before.

Watch Now ...
Testing AWS Applications Locally and on CI with LocalStack

In this XP Series webinar, Harsh Mishra, Engineer at LocalStack, showcases live demonstrations and advanced features, and highlights how LocalStack integrates with LambdaTest HyperExecute for faster test execution.

Watch Now ...