Episode Transcript
Speaker 1 00:00:08 Welcome to the Rogue Startups podcast, where two startup founders share lessons learned and pitfalls to avoid in their online businesses. And now, here's Dave and Craig.
Speaker 2 00:00:19 All right. Welcome to episode 261 of Rogue Startups. Craig, how are you this week?
Speaker 3 00:00:25 I'm good, man. I'm good. Just spent, uh, just spent a week in the U.S. and got back today. So I'm, uh, surprisingly not, like, terribly jet lagged, but, uh, I think by the end of this, I might just fall over on my desk. So you're getting me at the end of the day.
Speaker 2 00:00:41 Yeah. That, that trip back and forth is rough. It's worse for me coming back to the U.S. I don't know if it's, if it's the, the return leg that's always worse, because you're coming back after the whole, you know, the, the energy of going somewhere and you're, ooh, I'm there. And then afterwards you're like, ugh, I'm done, I'm going home. And that's where you crash. So for me, it's always coming back to the U.S. I don't know if that's the same for you going back to Europe at this point. Is that when you crash the most, or is it when you come to the U.S. because of the time zone snafu?
Speaker 3 00:01:13 Yeah. Usually going to the U.S. is really easy for me. It's just like a really long day. I'm really jet lagged for the next few days though, just being like, I get up at three in the morning, you know, um, and want to go to bed at like seven at night, so that's lame. But, but yeah, coming this way, I usually don't sleep well on the airplane, and so I'm just super tired when we get here, but I get over it in like a day. So, yeah. I mean, they're just different. They're just different. But yeah, this time the airplane was about 20% full, and so we all just freaking laid down, had three seats, and slept the entire time. So I actually feel really good. Yeah.
Speaker 2 00:01:50 Oh, nice. Yeah. That, uh, that reminds me of the old days of flying here. So now I'm going to date myself a little bit and say that, uh, I remember in the mid-nineties, when I was doing business travel, there were many red-eye flights where I had the luxury of laying down across three seats because there just weren't that many people on the flight. They didn't pack them in like sardines like they do now. So if you wanted to get some sleep, that was the way to do it. Everybody kind of spread out on the plane after the, the door closed, and everybody could lay down and, and get better rest that way. So,
Speaker 3 00:02:23 Yeah. Yeah, exactly. Exactly. Oh,
Speaker 2 00:02:26 Geez. Yeah.
Speaker 3 00:02:28 How about you? How are things? Stressful?
Speaker 2 00:02:30 Uh, I have to say, you know, I got a lot of stuff, not, not stressful in a bad way. It's just like, this is, this is a high-pressure month. And until the end of the month, I'm going to just feel like this ratcheting, increasing pressure going on. You know, first of all, it's the Super Bowl of e-commerce this month. We got Black Friday/Cyber Monday week. And, you know, the prep for that is always, it's nerve-wracking for me. I won't say that it's, it's super intense, because really we're pretty prepared for it. It's not like it comes up and surprises us. We don't do anything really stupid. We don't do any major deployments or infrastructure changes beforehand. You know, we're already kind of in a, uh, a tentative lockdown period right now. We're just doing bug fixes. We'll probably lock down sometime next week, but it's always this, this impending sense of doom, I guess.
Speaker 2 00:03:25 And it's just, it's my paranoia from my days in enterprise software. Like, I remember when I did the Olympics in 2008 for NBC, and we were preparing for a crush of going from zero to 10 million users in one day, the day of the opening ceremony, because that's when they were going to open up the portal on nbc.com. And so, you know, we, we prepped every kind of scenario for this. We ran all the tests. I did all the scaling. I ran this, this scalability simulation to see where it would fall down, and we fixed every possible thing in there, but it still didn't make me any less nervous for when they, you know, started up the opening ceremonies and I'm sitting here refreshing the, the web stats page there, watching it. So I kind of flash back to that, because it feels a lot like the same sort of thing.
Speaker 2 00:04:21 And you know, this is our fifth Black Friday. It's not like it's our first. And you know, our infrastructure, last year it was in great shape. We had done all this elastic load balancing stuff, and everything went totally smoothly. I'll just say that, like, everything went swimmingly. We had no hiccups. Some very minor UI stuff happened, but you know, it wasn't, like, infrastructure-destroying, customer-angering stuff. You know, it was little, little tiny things, and everything just worked great. And, you know, I have no reason to believe it won't just work great this year either, but there's still that sense of, what if I miss something? Oh, hell yeah. And I can't, I can't ever shake that feeling. I mean, it's just the, it's the engineer in me, like, sitting there, running through the scenarios: oh, well, what about this? What about this?
Speaker 2 00:05:13 You know, so, you know, it's one of those under-promise, over-deliver, expect the worst, hope for the best. You know, I'm definitely in that mode all month. But then on top of all that, we've got two other things kind of going on. We're trying to hire for some engineering positions right now, and I'm having trouble finding somebody that we're really, you know, gaga about. And on top of all that, I have a conference to go to in eight days, and I'm a speaker, and I ran through my material last night. And, uh, let's just say that it sucked. I, I've got the talk, I have the slides, I listened to myself give the talk, and I'm like, this is terrible, Dave. I, you know, I, I fully expected it to be there. It's just, it's adding to the stress, right? So, yeah, I, you know, I'm going to practice it five more times a day and I'll smooth it all out. And by the end of the week, I'm sure it'll be in a much better place, but you know, yesterday evening I was like, ah, man, this is rough. Yeah.
Speaker 3 00:06:16 Yeah. The first time you practice a talk, you're like, what was I thinking? Putting that in there and then, oh yeah. I mean, you, you, you say it out loud and you tweak the slides and you tweak the pitch and everything and yeah. By the end of the week, you'll be fine. I'm sure.
Speaker 2 00:06:28 Yeah. I think at this point, I'm in a good place with the talk. Like, I think I have too much in the talk, like, I'm trying to scale it down into the 30-minute window that I've got. They gave me 40; I'm going to try to go for 30, 35 minutes, give me some room for questions. But the thing that I'm finding is I'm rushing through the talk and I'm leaving out details and certain pieces of it. So I think, you know, it's good when you can cut; if you have to add, oh, that's worse, right? Yeah. That, I don't like being in that spot. So, you know, at least I'm at a point where I can go, okay, chop this, it's not as interesting; let's expand this one over here, that is interesting. Right. So
Speaker 3 00:07:04 Right. Yep. Nice, nice. Well, I know that, um, like one of the engineering kind of roles or functions, maybe not a role, but like a function that, that we're going to talk about is, is like QA and testing and kind of how testing works, I guess, in both of our worlds and what we think of how it works in both of our worlds and like what we think we need to do to kind of get to the promised land. Huh.
Speaker 2 00:07:32 If there is a promised land. I've yet to ever see testing lead me to a promised land. It's kept me away from some greased poles to hell, but that's about it. You know, I mean, with that said, I would like to say that, you know, I, I think there's some background about where we're at with Recapture that is some good information to feed into this discussion here. So this QA thing is coming up because last, well, really a couple of weeks ago, Mike and I were having a discussion, and we were talking about, like, okay, we need to hire somebody else. We need help. Uh, we need to get help on these various features here so that he can work on these big tech debt items that we've been deferring for a long time. And then it was a question of, okay, well, what is the bottleneck of all this process here?
Speaker 2 00:08:20 And then, oh, you know, between then and now we've had several releases where we've pushed something out and then all of a sudden, a day or two, or maybe a week later, a customer comes back to us and says, dude, this is fucking broken. And you know, Mike is a very diligent, careful developer. So, you know, I don't want to make this sound like this is in any way Mike's fault, because it's not. What we realized is that we've now grown Recapture to a size where the side effects of making a change are not easily discernible through simple testing. So we're now at a point where we realize, if we make a change or a series of changes, we kind of need to run through, like, a regression test suite before we push it out the door to make sure we didn't break anything else.
Speaker 2 00:09:12 And that's the part that we're struggling with now, is that he doesn't have enough time to do that, plus all the support, plus the feature development and other things, and be on top of it. Like, it used to work okay like that, and now it's not, and it, it's really about the size of the project that we're at. And, you know, on top of all that, you know, there's stuff going on with <inaudible>, and yeah, I mean, we're just at a point where we need somebody whose sole job it is to look at it, break it, try to make sure that we didn't break it. If there's something screwed up, note it, and then catch it before we push it out to the customers, because bugs at this point have very different optics than they used to. Yeah. Yeah. So, uh, yeah, I mean, that's kind of where we're at with, with our QA process here.
Speaker 2 00:09:58 And it's one of those, uh, you know, we needed to hire somebody in development to help us with the other features and things like that, but I don't have an unlimited budget here, and now we're talking about this QA person. And so I'm like, well, do I get a QA person? Do I get a developer? Do I get a developer who's going to do part-time QA? Like, I don't know. You know, all of these things are now open questions, because it very much impacts who you try to hire. Developers are not necessarily good QA folks, and vice versa, good QA folks aren't necessarily good developers. Finding somebody who does both really well is, like, a total fucking unicorn. And, you know, I'm already having a hard time finding somebody who's just, like, solid and communicative and fits in with the team in terms of, like, taking ownership and stuff like that. So yeah. I mean, this is, this is a real fucking challenge is what it is.
Speaker 3 00:10:53 Yeah. I mean, I'll, I'll tell you that, like, I find the same challenge on the kind of, like, management and leadership side too, right? Like, a good developer isn't necessarily a good engineering manager, you know? Um, and, uh, yeah, so I think that, like, for me, at the very beginning, the moral of the story is, like, a person that's good at a thing will not be good at a thing that's, like, adjacent to that, you know? So, like, a good developer might not be a good engineering manager or a tester or, you know, team leader, whatever. Right. And vice versa,
Speaker 2 00:11:25 Not necessarily. I mean, they could be, but you can't make that assumption, is, I think, the important thing to say.
Speaker 3 00:11:31 Yeah. And they might not want to be. So, like, I think that's really important to admit to yourself and to them. I think the other thing about this, cause, cause we've been through this recently, like, in a pretty direct way, right? Like, and it came up similarly, that, like, the plugin, like, for us, is the hard thing to test, just because it's WordPress and it's all these conflicts and interdependencies with other things. Like, it's easier for us to test the Laravel app because it's just its own thing, and it depends on many less external variables. And what we said is, like, a hundred percent the developers should not be the people to test this. We talked to a few QA agencies, and of course they're QA agencies, and they're like, yeah, for every two developers you have, you should have a QA person. I was like, of course you're going to say that, you're a QA agency. But
Speaker 2 00:12:26 Of course, yeah. That's very much enterprise thinking there. And for some enterprise developers I've worked with, even that ratio is a bit low. Yeah. I mean, it just depends on the developer. Right. I've worked with some pretty terrible enterprise developers where they probably needed their own dedicated QA person. Cause they're that bad. Right.
Speaker 3 00:12:44 That's terrible. That's terrible. But
Speaker 2 00:12:46 I don't think for a startup that makes sense now.
Speaker 3 00:12:49 No, no. And I'll, I'll tell you where we have landed for the moment. It's not perfect and it's not done, but what we're trying to do is, is a couple of things. One is a lot more automated, like, integration or feature tests, like, in the code, you know, to have the confidence to make changes without breaking things on, like, a very detailed code level. That also, I think, goes along with, like, writing really good specs, you know, or having really complete specifications or Figma drawings or whatever of the thing you're going to build, so that, like, user acceptance testing is possible without it being you that does it, you know? And that's, like, some of what we ran into with our previous process. Like, I talk to our designer, he builds, he, you know, he builds it in Figma, and then the developers build it, and then I'm the one that has to test it, because we don't have, like, all this documentation about, like, what a thing is supposed to do. So, like, if you're going to have anyone else test it, whether it's a developer or a QA person or, like, a project manager or whatever, it has to be in writing somewhere for that person to QA against, because otherwise it's just in your head and they don't know, like, what right is, you know,
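To picture what Craig is describing here: acceptance criteria that get written down in a spec can also be turned into automated feature tests that anyone on the team can run, not just the founder. Here's a minimal sketch in TypeScript using Jest and Supertest against a hypothetical Express app; the endpoint, fields, and behavior are illustrative assumptions, not either company's actual code.

```typescript
// Hypothetical feature test derived from a written spec.
// Spec (illustrative): "Creating an episode requires a title;
// a request with no title returns 422 and nothing is saved."
import request from 'supertest';
import { app } from './app'; // hypothetical Express app under test

describe('POST /episodes (spec: episode creation)', () => {
  it('creates an episode when the spec-required fields are present', async () => {
    const res = await request(app)
      .post('/episodes')
      .send({ title: 'Episode 261', audioUrl: 'https://example.test/261.mp3' });

    expect(res.status).toBe(201);
    expect(res.body.title).toBe('Episode 261');
  });

  it('rejects an episode with no title, per the spec', async () => {
    const res = await request(app)
      .post('/episodes')
      .send({ audioUrl: 'https://example.test/261.mp3' });

    expect(res.status).toBe(422);
  });
});
```

The point of the sketch is that the spec text lives in the test names and comments, so a support person, a contractor, or a new hire can see what "right" is without it living in one person's head.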
Speaker 2 00:14:09 And that is really hard, because somebody is responsible for writing that down, and that usually ends up being you and me, right? Yep. It's the founder, because you've got a strong idea of what that looks like. And you know, to a certain extent, I know that Mike knows that as well, but if he's writing it down, that means he's not doing other things either. So he can't do both. And you know, so somebody's got to take the time hit, and it would probably end up being me to do that, because I want to make sure that he's being productive and working on the development, which is the highest-value thing he can actually be producing. Yeah. Yeah.
Speaker 3 00:14:43 And that last part you said is, is what we have tried to, like, use to guide all of this: like, what is the most valuable and, like, appropriate thing for each person to be doing? You know? And, like, where we landed for now is kind of a mix. So, like, the designer, who's now, like, head of product, is the one that specs it out, right? Because he really has, like, the whole picture from me of what the thing is supposed to do and what users are supposed to see. He and our front-end developer are the ones that do user acceptance testing for the Laravel app now, and one of two support folks are the ones that are doing user acceptance testing for the plugin. And I don't know why we kind of broke it out like that, but, but that's kind of where we landed, is, like, the head of product and the front-end developer are the ones that are kind of saying, hey, yep,
Speaker 3 00:15:37 This does what we said it was going to do in the product spec and in the drawings and stuff, both from, like, a design and a functional perspective. And then in WordPress, just because our support folks deal with WordPress so much with, you know, Seriously Simple Podcasting, like, it's a pretty good fit for them to be the ones to kind of measure against what something should be. And they just have a standard checklist of, like, 30 things that the plugin should do, and they test it for every release. And it's not, it's not perfect, but, like,
Speaker 2 00:16:06 No, and it never will be. That's not the goal.
Speaker 3 00:16:09 And the other thing, which is just, like, a big pain that I think a lot of development groups have, and, and I think we have it to some extent, is, like, testing becomes a blocker really quickly. Because, like, we have several people committing code and then it just sits there on staging, and people are like, hey, it's on staging, I'm just waiting for it to be tested, or I'm waiting for, you know, a PR review from, like, another developer. And, like, if we can shorten that time, then, like, we get code shipped faster and it's less likely to be stale, and all this merge conflict stuff, like that. I think that's what we're trying to solve for, is, like, when a developer has something they think is ready to go, somebody needs to test it, like, within that half a day, you know, so that, so that things keep moving and the developer can, you know, put that down and move on to something else. Like, that's, that's the big thing. And shit doesn't break.
Speaker 2 00:16:59 Yeah. I mean, this is, this is the thing that's very frustrating about testing to me. So let me just say, first of all, I am, I'm definitely pro-testing. And you know, what we've done in Recapture at this point, I think, has worked pretty well as long as we've been a team of one or at most two, cause we were for a while. Um, you know, there were two people actually doing the development, but one of them was kind of in charge of making sure everything was, everything was okay and shipped out the door. That was Mike, being the senior guy on the project. But when you're talking about, you know, once something gets past a certain size, you just, you have to have somebody that's in charge of reviewing the whole thing, cause it gets too big to keep in your head. At this point, this is the thing that Mike was pointing out to me, is that it's now too big.
Speaker 2 00:17:45 He can't hold it all in his head. So when he's looking at something, he's not sure, if I change this over here, is it going to impact something way over here on the other side of the product? And he used to be able to do that. And that definitely is something that you can do as a developer, until a project gets past a certain size. And I, I've reached that point as well, and I know exactly where he's coming from on that. But at the same time, you know, your, your main advantage as a single-person or small-team startup is speed. And you're, you're cutting some of your speed by adding in the testing process here, because now you just talked about the bottleneck, right? And the bottleneck slows things down a little bit. And this is where I've seen other startups really start to have a boat anchor behind them, where they're like, oh, well, I'm gonna, we're going to put this in.
Speaker 2 00:18:34 But then it has to be QA'd for two weeks. And I'm like, whoa, two weeks? And they're like, yeah, yeah, two weeks. Okay. And this is not, you know, I'm not going to name names on this, but the, these are companies that are smaller. They have engineering teams that are definitely less than double digits. And you know, there's definitely a size at which your speed gets reduced. And you know, my fear is we're kind of hitting that size a little bit here. And I like the speed advantage that we've had up to this point. So I'm trying to figure out a way to preserve that without, you know, sacrificing the quality of the product, because that's another thing that we, you know, are known for here. And I don't want to piss the customers off because we're shipping buggy software. Yeah. So we're, you know, we're trying to talk about some strategies to do that.
Speaker 2 00:19:22 You know, one of those things: Recapture is written in, in JavaScript, and converting things to TypeScript would make it so that we're less likely to write certain kinds of bugs, because of strong typing in the code. But that means we basically have a huge amount of refactoring ahead of us, and we've got a pretty sizeable code base now. So that's, like, non-trivial. And you know, I, we try to tackle a tech debt project once a year. We already did a tech debt project earlier in the year that was held over from last year, and I'm not really anxious to do a third one right now, for obvious reasons, right? But there is some value in that. The other thing is to hire somebody whose job it is to deal with that, but then, you know, we've got the budgetary restrictions on there. And the other thing is to expand the unit tests.
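A quick aside on the TypeScript point Dave makes above: the main thing strong typing buys you is that a whole class of "passed the wrong shape of data" bugs fails at compile time instead of in production. A tiny illustrative sketch follows; the cart and email names are made up for the example, not Recapture's real code.

```typescript
// Illustrative only: a typed version of the kind of function that
// bites you in plain JavaScript when someone passes the wrong shape.
interface AbandonedCart {
  email: string;
  total: number;   // order total in cents
  items: string[]; // product names
}

function formatReminder(cart: AbandonedCart): string {
  const dollars = (cart.total / 100).toFixed(2);
  return `Hi ${cart.email}, you left ${cart.items.length} item(s) worth $${dollars} in your cart.`;
}

// This call type-checks and works as expected:
console.log(formatReminder({ email: 'shopper@example.test', total: 4999, items: ['Socks'] }));

// This call is a compile-time error in TypeScript (uncomment to see):
//   formatReminder({ email: 'shopper@example.test', total: '$49.99', items: undefined });
// In plain JavaScript the same call would run and only blow up (or mail
// the customer a "$NaN" total) somewhere downstream, exactly the kind of
// side effect that's hard to catch with quick manual testing.
```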
Speaker 2 00:20:09 And that's something that you were talking about as well, right? And the unit tests get you a certain way down that path, but they will never go all the way on that path, because it's the integration testing where lots of stuff really tends to bite you, especially when it comes to user interface testing. Like, good luck doing unit testing on user interfaces effectively. Uh, you know, I've watched a, a dozen frameworks come and go. Every single one of them claims to have solved this problem. Every single one of them was shit. Just plain shit. You know, you, you can come at me all you want on Twitter about that. If you think you've found the ultimate framework, I've seen two decades' worth of attempts on this one here. Nobody's really cracked this problem, in my opinion. So prove me wrong and come at me on Twitter and tell me, tell me differently, cause I would love to know that somebody has truly cracked this problem, but so far I've not seen it. And you know, that makes this all very hard, right? So, yeah. Fuck. I don't know. Yeah.
Speaker 3 00:21:05 I, so yeah, we, we do a bit of unit testing and a bit of, like, feature integration testing. We do front-end stuff with Selenium, I believe. Um, and, like, I don't know how good the guys think it is, but yeah. I mean, so we started off with zero unit tests for about the first two years, and now we're up to, like, more than a third of the code base covered at this point, which I feel like is a solid chunk. And we started by kind of testing the critical path. Like, can you sign up? Can you pay? Can you create a podcast? Can you upload an episode? All that kind of stuff. And, like, all that stuff is totally covered now. And now I think the rule that we have is, like, when somebody goes to, like, add a piece of functionality that touches this piece of code, or they go to refactor or expand this thing, like, everything you do from this point forward has to be tested, you know?
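Craig mentions Selenium for the front end and a critical path of sign up, pay, create a podcast, upload an episode. Here's a rough sketch of what one of those critical-path checks can look like using selenium-webdriver in TypeScript; the URL, field names, and flow are placeholder assumptions, not Craig's actual app.

```typescript
// Critical-path smoke test: can a new user sign up and reach the dashboard?
// URLs and form field names below are hypothetical placeholders.
import { Builder, By, until, WebDriver } from 'selenium-webdriver';

async function signupCriticalPath(): Promise<void> {
  const driver: WebDriver = await new Builder().forBrowser('chrome').build();
  try {
    await driver.get('https://staging.example.test/signup');

    await driver.findElement(By.name('email')).sendKeys('qa+signup@example.test');
    await driver.findElement(By.name('password')).sendKeys('correct-horse-battery');
    await driver.findElement(By.css('button[type="submit"]')).click();

    // The written spec for this path: a successful signup lands on the dashboard.
    await driver.wait(until.urlContains('/dashboard'), 10_000);
    console.log('signup critical path: OK');
  } finally {
    await driver.quit();
  }
}

signupCriticalPath().catch((err) => {
  console.error('signup critical path: FAILED', err);
  process.exit(1);
});
```

A handful of checks like this, run against staging before every release, is one way to get the "somebody tests it within half a day" turnaround without a dedicated QA hire.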
Speaker 3 00:21:56 And so we're hoping that that increases our test coverage to where we have the confidence to build faster and test less, like, have people testing less, right? I don't know if that will ever happen, but I think, and you can probably attest to this better than I can, like, the guys say that, like, writing tests makes them write better code, um, because they think of, like, the happy path and the sad path and all that kind of stuff. So, like, I can imagine that just writing tests, and knowing that you're going to write tests, makes you, you know, ship better features, probably, like, more high-quality features.
Speaker 2 00:22:28 I think it can. I think there's a degenerate side of, so you're talking about test-driven development, and there's a degenerate side of test-driven development where people start testing shit that is just, like, absolutely inane, because they're trying to get, like, a hundred percent code coverage on their testing. And it's just, like, why are you wasting your time on this? That's, that's not a good use of time. If you hit 80%, in my opinion, you've kicked ass and taken names. Stop, stop fucking around at that point. That last 20%, just do it with manual testing. You're better off, honestly. You're not going to get the same return as you did on the first 80%. So even getting up to 30% is good. 50% would be better. 80% would be, like, the true ideal. But you know, I can tell you for a fact, we aren't there with Recapture, because we weren't doing our unit tests.
Speaker 2 00:23:17 Uh, gosh, I don't even remember when we stopped running the unit tests. Cause there were some problems, we had to refactor something, and then the unit tests were all broken as a result of that. And it might've been some module upgrade or Node modules change or some shit like that. But, uh, yeah, that, that just screwed everything over for a very long time, and we didn't have the time to go back and deal with that. So we didn't, and now we're in this other situation. But I think this actually gives us a better opportunity to say, all right, what is the most important coverage here? Because some of the coverage that was in there before was kind of inane, so it didn't really have a tremendous amount of value. It was one of those, oh, let's just test, let's test all the getters and setters on this object here, even though 90% of that is never actually exercised. Like, that's not valuable right now. So I think you've taken a better approach at that, you know, looking for the critical-path functionality here, making sure your coverage is on that, cause that's where your customers are spending 80% of their time. Right,
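One concrete way to encode Dave's "80% is plenty, put the coverage where customers actually are" point is per-directory coverage thresholds, so the build fails if the critical-path code slips but nobody is forced to chase 100% on getters and setters. A sketch of a Jest config doing that; the directory names and numbers are illustrative assumptions, not either product's real setup.

```typescript
// jest.config.ts (illustrative): a modest global floor, a stricter bar
// on the directories customers actually exercise. Paths and numbers are made up.
import type { Config } from 'jest';

const config: Config = {
  collectCoverage: true,
  coverageThreshold: {
    global: {
      lines: 50,    // realistic floor for the whole code base
      branches: 40,
    },
    './src/checkout/': {
      lines: 80,    // critical path: hold this one to a higher bar
      branches: 70,
    },
    './src/email/': {
      lines: 80,
    },
  },
};

export default config;
```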
Speaker 3 00:24:18 Right. Yeah. But I mean, yeah, I, I think the problem is that that stuff doesn't need to be tested as long as you're not adding new stuff, right? Which is what Mike is saying, is, like, when you have everything good and test coverage is whatever it is, that's great. And then you go add something, and, like, test coverage becomes inadequate. And I don't know, I don't know how to solve for that. Cause, cause we definitely don't have that nailed, of, like, when a developer goes to create a new thing, are they sufficiently adding, you know, code-level tests to, to make sure that it does what it should do? I don't know. But, like, going back to, like, having someone own this: first of all, I think it's important to have someone own this, and it's not you, you know, and it's not Mike, because both of you want the code to ship, you know, and so, like, you're inherently biased to, to, like, test the happy path, you know, and just, like, find a way to make things work.
Speaker 3 00:25:17 Even if it's a little buggy or weird. But, like, I think the big challenge is, like you said, someone has to write that spec, then someone has to create a thing against which the test is done, if you're talking about manual testing, and that's you, you know. Like, unless you have, like, a, you know, head of product or something that, that you kind of relay what you want to build to, and they go and kind of run with it from there, they could be the one to write the test cases. But that's the one thing I took away from, like, interviewing a couple of testing agencies, is, like, they're in all the product meetings with you, and they hear you say, hey, we're going to build this integration with ConvertKit that's going to do this thing, yada yada. And they're taking all these notes, and they're, like, writing out the test sequences that they know they're going to test against later. And, like, that's the thing, is it's got to be documented somewhere so someone else can do the testing later. And I think that's the super hard thing for, for, like, companies our size.
Speaker 2 00:26:15 Yeah. I mean, we're definitely not an enterprise company, so you can't have, like, a dedicated QA manager or a product owner that is specifically thinking of that one thing and making sure that it gets documented and transmitted to the right person and so on and so forth. Like, yeah, we've got a lot of hats to wear, and that's just one of them. So yeah. Thanks, Craig. You've really depressed me now. I've got even more work than I had before. Yeah, no, but I mean,
Speaker 3 00:26:41 This was good. It's, it's like the natural evolution, right? Like, the product gets bigger, the company is, you know, more successful, and you have these new challenges. Like, these are, I think, the good problems to have. Indeed. I cannot complain. Yeah. Yeah. I probably still will, but who's going to listen to it? So it's fine. But you know, Dave, I'd love to hear, like, what folks listening do around testing. Cause, like, I certainly don't know, and it doesn't sound like you have it, like, a hundred percent nailed either. So, like, I'd love to hear how other folks with, like, smallish teams handle, like, scoping and spec'ing out a feature and then testing against that, uh, without, like, dedicated people for each of those. Um, so shoot us a message at podcast@roguestartups.com or hit Dave or I up on Twitter and tell us all about it. And just on the, on the feedback point, we've gotten a lot of, like, feedback and engagement from folks, you know, sending us emails or hitting us up directly in the past few weeks, and it's really great. So if y'all have any kind of thoughts or questions or comments or anything about this episode, or anything else that Dave and I are spouting off about, please, uh, please get in touch. We'd love to hear from everyone. And, uh, hopefully we can kind of bring that back into the episode here in the future. Thanks so much, and we'll see you next episode.
Speaker 1 00:27:51 Thanks for listening to another episode of Rogue Startups. If you haven't already, head over to iTunes and leave a rating and review for the show. For show notes from each episode and a few extra resources to help you along your journey, head over to roguestartups.com to learn more.