RS355: Code Velocity and the Future of SaaS

February 25, 2026 00:58:05
Rogue Startups


Show Notes

What does it mean to be a SaaS founder in a world where AI can build features faster than we ever could? In this conversation, Craig sits down with Arvid Kahl to unpack the shifting reality of software, competition, and the craft of SaaS in 2026. They talk about code velocity, discernment, marketing in an AI-saturated world, and why SaaS is not dead, even as the software part becomes commoditized. Listen in to find out what actually matters these days if you want to build something that lasts.

Highlights from Craig and Arvid’s conversation:

Resources and Links from This Episode

If you feel like Rogue Startups has benefited you and it might benefit someone else, please share it with them. If you have a chance, give Rogue Startups a review on iTunes. 

Do you have any comments, questions, or topic ideas for future episodes? Feel free to reach out to me:

Chapters


Episode Transcript

[00:00:00] Speaker A: What does it mean to be a SaaS founder here in 2026, with AI doing so much and kind of gobbling up so much of the landscape, or the breadth of what it means to be a SaaS founder or to build SaaS products at all? It's a question I'm asking myself a lot, and honestly, I don't know the answer. So I went to one of my best friends in the kind of SaaS and indie maker space, Arvid Kahl, to talk about, like, how he sees this. [00:00:37] Speaker B: Right? [00:00:37] Speaker A: His newsletter is one that I read every edition of, and he has some really insightful kind of thoughts about what the heck we're all doing here. What are we doing as SaaS founders? What does AI mean to us, both in a positive and a kind of threatening way? How are we managing this, both from an internal mindset perspective and from a tooling and external and customer perspective? What does this mean for the longevity of a business? Great discussion with Arvid today. I hope you enjoy it. There's more software than ever in the world. And what does that mean for those of us who have already created something that's not crap and want to keep creating that really valuable thing? Because, and I was saying this to one of our team members, what Claude Code and Cursor and all these things have done is give everyone a reason not to market their product, but just to build a bunch of crap. [00:01:39] Speaker B: That's right. [00:01:41] Speaker A: I don't know. I guess the reason I wanted to talk to you is, what's the next three or four steps on that path? That's kind of the question, and it's [00:01:51] Speaker B: a mighty question at that. Because first off, I think the technology that we're dealing with here is just changing every day.
Like there's just this massive stream of improvements and changes and new adaptations and new tools and ways to use new tools, such that anything you say today might be completely superfluous tomorrow, because somebody has just released the next agentic whatnot that now integrates that other whatnot, and it's hard to even keep up with what is there. I don't think that ever happened to me in the past. Even with the plethora of JavaScript frameworks that existed in the past, all the Angulars and the Vues and Reacts and all of that, maybe 10, 12 years ago, there still was a feeling of, okay, I know what's happening, and if something new is happening, I will hear of it over the next couple weeks and it will be fine, because things move rather slowly in this space. But just yesterday, as we record this, Mistral, the French company that releases the Mistral models, announced a vibe coding CLI and a code-generating model. And now there's Google with their Gemini thing, and then there's the OpenAIs and the Anthropics, and then there's Qwen and there's DeepSeek. Even by remembering a lot of them, you barely get to 10% of what's actually out there. To a technologist, not knowing what the cutting edge of technology even looks like, because even if you look at it, it is like blurry and vanishing from your sight, it's kind of freaky. And to me, that impacts what I think my short-term and my long-term choices are, because I'm now at a point where two weeks from now, my process could look completely different. And I have to be fine with that if I want to keep adopting this technology. So the question, what are the next three or four steps? Well, do you want to talk about the four steps that we were thinking about yesterday, or today, or maybe next week? Because those might look completely different. Great answer, right? Yeah, no instructive nature here. It's awesome.
[00:03:57] Speaker A: I think, like, as you're saying all that, I kind of want to say, if that's the case, then the tooling doesn't really matter. Maybe. Right? Like, if you're saying, hey, you know, Codex and Claude Code and Mistral or whatever can all basically do the same thing. Oh, and a really good example of this is, and I hate to throw shade on them, but if you, you know, pull up Bolt and Lovable and v0, it's exactly the same product. [00:04:26] Speaker B: Yep. [00:04:27] Speaker A: And you might say that about podcast hosting, right? Castos and Transistor and Buzzsprout all basically host your files and they create an RSS feed and they send them out and they give you analytics. And I think that when that's the case, then maybe the thing you're worrying about is not the thing to worry about. [00:04:44] Speaker B: Yep. Yeah, maybe. I think so. I think that. [00:04:48] Speaker A: And the reason I wanted to have this conversation wasn't really to talk about technology, because, like, you see, you know, Alex Lieberman and all these folks talking like, hey, it's just the easiest thing to do now. I mean, just before we were talking, I pulled up Google AI Studio, which is amazing and free, and just gave it a one-shot prompt for, like, our big feature initiative for the first quarter, and it did it. And I'll just take this and give it to the team and be like, hey, go build this just like this. So if that's the case, what are we spending most of our time doing? [00:05:23] Speaker B: And that to me is the much more interesting question, right? Like, if you started your software development career in the past, as most of us have, right? You start by building tiny little things, and you get better by building slightly bigger things and seeing other people's things and integrating what they do by blatantly copying their code.
You know how we all learned how to build product: by just, like, going to Stack Overflow, copying and pasting. [00:05:46] Speaker A: From Stack Overflow, copying and pasting. [00:05:49] Speaker B: At least there is this kind of internal journey that you have as a developer or as a product builder, technical or not, doesn't really matter. Even if you click it together in Webflow or other no-code tools, at least there is this understanding of the process, of how a thing came into being, that is happening inside of your lived experience, right? It's not like right now, where you just prompt and then you get the thing, and whatever made the thing happen is happening externally. It was, at least to a certain degree, an internalized thing from which you could learn. That's really what it is. First off, it was paced at a human speed. You could take things in at your capacity, you weren't overwhelmed. And it was a gradual thing where you learned by mistakes, you learned by experimentation. And then over time you built this kind of capacity of skills that allowed you to do this a little bit quicker and a little bit better with the next project. I think that is something that a technologist, a developer, a product builder still kind of needs, to be able to judge the quality of the outcome of all these tools, be it Lovable, be it Claude Code or the Google whatnot or GitHub's Copilot. There are so many names, even examples get stale because there's just so much technology happening around them. But you need, and the main phrase for this is, to be able to have discernment. At this point, Jack Ellis of Fathom Analytics is tweeting quite happily about his experiments with Claude Code and all the tools right now. And he has, like so many others, figured out that it is really not the serious senior engineer who gets replaced by these tools. It's the junior engineer that the tools kind of mimic.
And then the senior engineer makes a choice, a learned-judgment kind of choice, about: is this good or is it not? And I always feel like we're now shifting, in this world of all these many tools, our skill from execution to discernment, right? From doing the thing well to understanding what a well-done thing looks like. And that to me works in every process. It's for writing too, right? The way I write my podcast episode, my blog post, is I just start brainstorming into a microphone and I record it, get the transcript, throw the transcript into a Claude or whatever window with a prompt that describes what I want to have out of it. Out comes a script for a podcast episode that I then work through and see, does it make sense? It's built on my own words, but it's kind of turning them around, making them a little bit more, you know, cohesive, because my brainstorming is going all over the place. And then it kind of condenses my wild creative blah back into a shape that resembles something meaningful. And I then record this from the script that I adjusted. And that to me means I need to see whether the output that comes from whatever compression tool I use on my random input looks good or not. I still need to be able to see that, right? I'm not going to take the script and read it verbatim. That would not make it mine. What makes it mine is saying, okay, this paragraph looks perfect, kind of what I said anyway. This one looks a bit wonky, let's put some personality in there. That's for writing, that's for coding, that's for writing cold emails to people, right? That's for all these little steps that we can take using these wonderful generative or analytical AI tools. We still need to have the discernment. And if discernment is the thing you need, well, you only get to discernment by developing taste, by developing understanding. So you can't skip that part.
And I'm quite afraid for the next couple years, where people are replacing all their junior developers with these tools completely, and then they notice that their senior engineers are moving up the career ladder. We need new seniors. Well, those seniors were the junior developers that you didn't train. That's the problem. I feel we're taking the tastemaking out of the actual, like, the craft. And to keep that, honestly, I don't really know how we can keep that in if a tool can one-shot a solution like this. I had that yesterday too. So for Podscan, I was building podcast similarity by effectively encoding all of my 4 million podcasts. I have all this information on demographics and analytics and episode contents, themes, topics. I have all this data, and I'm now encoding it in roughly 3,000 dimensions as a vector storage. So there's just a lot of stuff going on. I found that AWS now has these S3 vector storage buckets. So I built. Well, I didn't build it. I brainstormed what I wanted this to look like for like half an hour into Otter AI, which I use for all my brainstorming because they're really nice with the transcription and stuff, and they're on my phone, they're on my computer. It's really easy to use. This is not a paid advertisement, by the way, but it's just a tool that has stuck with me. And if an AI tool sticks with you for like two or three years, this tends to be a good solution, right? [00:10:49] Speaker A: So, yeah, well, it was not an AI solution before. [00:10:52] Speaker B: That's exactly right. [00:10:52] Speaker A: That's the only way, right? [00:10:54] Speaker B: And we might want to talk about this too, right? The kind of AI-native versus just, you know, AI-progressive tools. But I dictated in there like 30 minutes. I talked about how I wanted this to work, how I thought I would build it, right? I would use this AWS SDK and I would, like, make this call and I would encode it like that.
And I didn't really know much about vector storage to begin with, so I just said: and here, you'll just fill in the blanks, you'll look at the documentation and you'll figure it out. I took this prompt, I let it sit for a couple of days because I had other things to do. And yesterday at 2:00pm, I was like, okay, it's time to see what happens. So I checked out a development branch on my local computer. I took the prompt, threw it into Claude Code, let it rattle for 20 minutes. I came back and tried it. It had come up with a couple commands, like, here's how you set up the buckets, here's how you import your data. Tried to set up the buckets. First error message was like, something didn't work. My AWS SDK was the wrong version. Had to update that, then tried it again. Then it actually connected to AWS, but said your user doesn't have the right privileges, here is the inline JSON that you need to put into your user permission thing to be able to access it. Put that in, checked that it was sane, obviously, applied it, and then it worked. And then all of a sudden, like, I had done nothing but dream up this thing that I wanted. My podcast database system was now encoding in the background, whenever something was updated with a podcast, these 3,000 dimensions of a vector, and storing them into three or four different vector storages, and then at the same time pulling in podcast proximity, other shows, from that data and persisting them into the database. And now I have this data that's just accumulating right now in production, since yesterday at 2:30 or 2:35. That was done roughly half an hour after I started doing this. And it would take me weeks, if not months, to actually build this myself, with all the thinking and the testing and the servers and the abstractions. And, you know, I was like, that's just not okay. It is really cool. It is incredible how fast we can build this.
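Stripped of the AWS specifics, the similarity pipeline Arvid describes boils down to: embed each show's metadata as a vector, persist it, then rank other shows by cosine proximity. Here is a minimal sketch; the in-memory store, the three-dimensional made-up embeddings, and the show names are all assumptions for illustration, standing in for the real ~3,000-dimension vectors in S3 vector buckets.

```python
import math

# Toy in-memory vector store standing in for an S3 vector bucket.
store: dict[str, list[float]] = {}

def put_vector(key: str, vec: list[float]) -> None:
    store[key] = vec

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity: dot product over the product of vector norms."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def similar(key: str, top_k: int = 2) -> list[str]:
    """Rank every other stored podcast by cosine proximity to `key`."""
    q = store[key]
    ranked = sorted(
        (k for k in store if k != key),
        key=lambda k: cosine(q, store[k]),
        reverse=True,
    )
    return ranked[:top_k]

# Tiny made-up embeddings: tech shows point one way, true crime another.
put_vector("tech-news", [0.9, 0.1, 0.0])
put_vector("ai-weekly", [0.8, 0.2, 0.1])
put_vector("true-crime", [0.0, 0.1, 0.9])

print(similar("tech-news"))  # → ['ai-weekly', 'true-crime']
```

In the real system this ranking would run whenever a podcast record updates, with the neighbor list persisted back to the database; everything above is illustrative only.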
It makes us so incredibly capable of building features so fast that we lose kind of the understanding of our code base. Because I had to go through this for like 2 hours to understand what it actually does, after it was already working in production. That says more about me and my deployment style. But it's hard to maintain the comprehension of a system like this if a system can change so significantly in minutes and work. Right? [00:13:36] Speaker A: Yeah. That's amazing. That's amazing. Like, I have two things to say about that, I think. One is, there will be people listening to this who don't use things like Claude Code and don't believe that what you're doing is reasonable for a production environment application with hundreds or thousands of users. I'm 45. I have gray hair. Our business has been around for a while. I think from a mindset perspective, one of the most important things is for me to believe that those people are wrong and crazy, and that, plug me into the Matrix, I want to go as hard into all this stuff as I possibly can, because if I don't, I'm going to get run over by you who are doing this. So I think for everybody out there, and I'll say this to some people on my team right now using Copilot and believing that it is the AI solution for coding: you're wrong. Right. And, like, get on board with the thing that's going to take 80% of the work off your plate, not 20. [00:14:43] Speaker B: I was talking to a dear friend on the weekend who is not technical at all, but he has an app in the iOS App Store for a very specific, like, health niche thing in the German App Store, and is now expanding to the global market. It's already paying his rent and everything. It's a tool that he built completely by going back and forth in ChatGPT in the browser window, pasting code and getting code out and copying back and forth between Xcode and ChatGPT. So even that facilitated a non-technical person.
He doesn't know at all how to code, but he learned it by just trying to figure it out with the chat. And that is already your competition, right? A non-technical person just willing to go through shoveling stuff back and forth, copying, pasting and hoping for the best. That is the new Stack Overflow copying, right? And that will eventually. I introduced him to Codex, because he's an OpenAI person, so I said, might as well try Codex. And he's been installing it and now been experimenting with how to prompt, and learning how to prompt more correctly for in-terminal agentic systems, these kinds of things. But it's already that that you're competing with. And I understand completely how somebody would say, well, you have no control over this code, you don't know what it does, it has side effects you might not understand. And I'm very aware of this, which is why I also started, for the very first time in my life, doing test-driven development as well. Or let's just say adding tests to the code base. Let's be real, right? Actual test-driven development would be a different thing, but AI can help with that as well. It's just again something where you need to discern: is this test actually testing what you wanted it to test? But the Podscan code base is now significantly tested. Like, there's 4,000 tests in there that actually test for things to work. And whenever a new feature is added like this, like a massive one, I still run the whole test suite and have it add more tests for the new thing. Right. So you can argue that you might lose control a little bit, but like anything in software engineering and building products, it's about orchestrating the actual business, not just writing code. Right. And a good code base has to be tested anyway, so it doesn't matter who writes the code, you need to make sure that it doesn't explode the rest of it. [00:17:07] Speaker A: Yeah, yeah. I mean, I think for me, like, I'm that guy.
Like, we're introducing a new product. It's called Linkberry. It actually does the exact same thing you're talking about with your podcast, but for LinkedIn. You talk to it, it creates LinkedIn posts. And so the fine-tuning of the taste and the output is really the product. Right. Because the tech of it, like, I built it with Claude Code, and it's awesome, it's really good. But the thing to me at this point, I'll call it code velocity, is the most important thing. It's not understanding the code or crafting the perfect solution. It's just, can you throw enough compute at this for it to be done first, and then right, and then efficient, secure, maybe? You know, like, I don't believe that you as a developer understanding the whole architecture of the thing is important anymore. And I'm, like, not a developer, and so I'm sure the developers are yelling at their windshield right now listening to this. But I just don't believe that's the best use of your time anymore. Because you can just fire up an agent and say, hey, run through the entire code base, give me, like, the map of the thing or whatever, or write, you know, tests, or do a security scan, or act as my CTO. I just don't think it's the most important thing for you to focus on anymore. [00:18:26] Speaker B: Yeah, two things, right? First off, you're not an Airbus software engineer. Like, you're not building the next, and if you are, please do not listen to this, right? Or listen to us and do the exact opposite. But you're not a rocket surgeon, right? You're not a rocket engineer. You're not building, like, minimally invasive robotic surgery tools. If you do, please do not vibe code, or please do not use AI-assisted tools without, like, a proper human process. But again, most of us are building SaaS that is non-critical, or at most minimally critical, right? I mean, you're building a podcast hosting company, there's a lot of money on the line.
I guess there's some criticality to the service being up, but the specificity of the features, or if there's a bug somewhere, you'll fix it, right? It's not that a bug will bring down your business, or if it does, it's down five minutes. People have trouble downloading their podcast, and then they hit refresh and then it works again. It's fine. So velocity becomes a much more interesting thing, and as it is sped up so much through AI tooling, I think it's almost like self-inflicted business damage if you do not embrace it, because your competitors will, obviously. But all competing solutions also do. I think that's what people often miss, right? It's not that, for you maybe not the best example, but any tool that takes some data and generates a report or whatever is always competing with Excel. It's now competing with OpenAI, where you just throw your, you know, Excel sheet in there and you get the report done by the AI itself. You don't even have to go to an external SaaS tool for these things anymore. So the competitive solutions that exist, the competitive tools that people use, may not only be your SaaS competitors, they may also just be ChatGPT at this point, or Claude or whatever. Right? So that is who you're competing with. So the baseline is that you have to be at least as good as them, or better in a way that these tools cannot kind of replicate, because they have to be generic where you are the more specific thing. [00:20:29] Speaker A: Right. [00:20:30] Speaker B: You always have to stay on top of that as well. And if this technology is being used by, you know, people who are completely non-technical, who think ChatGPT is a person or whatever, but still use it, that's now how you have to think about speed of development and speed of features. Unfortunately. I would love to be able to sit there and craft this perfect, super stable, hyper-optimized solution, but I don't have to. Or I shouldn't.
Maybe that's the better way to phrase this, right? Because if I do, I'm missing out on the things that actually might move the people that use this forward, instead of me being proud of my infrastructure accomplishment. [00:21:09] Speaker A: Yeah, I want to read a tweet by Jesse Tinsley. So Jesse's the guy that bought, was it Gusto or not Gusto? The tax solution, last year, right? He bought several companies, like, pretty big companies. I think he's on the list to buy TikTok. So he says: everyone is talking about AI saying SaaS is dead, and they're dead wrong. Want to build a billion dollar company in 24 months? The biggest winners will take legacy SaaS companies and turn them into AI-first companies. Why legacy SaaS? 1) niche training data, 2) refactor everything, 3) capture instant distribution, revenue, and customers, and 4) skip years to build and capture market share. A bunch of vertical category winners right in front of everyone. Can buy them at 1 to 4 times ARR in cash, meaning their "AI competitors," in air quotes, trade at 30 to 50 times. Can anyone say arbitrage? Traditional VC and PE worlds are colliding and everyone is sleeping on it, mainly because it's a new category and 99% of investors prefer safe, routine investments, not new-category builders. But in a decade it will seem obvious. Buy, integrate, refactor, arbitrage. [00:22:30] Speaker B: I mean, that sounds credible and realistic, because we know so many of these, like, niche incumbents, right? And that's a process thing as well, and maybe the velocity part is really important here. If you go into any old-school company, their process is just slow. Even if they have, like, Scrum or whatever internal process they might have, that in itself has a speed limit built in. Because in this company, we have our retrospective meetings every second Thursday of the month or whatever. Right.
That just means that you can only get so much done, because you can't do a retrospective before everything is finished. And then that's also how you scale the points, whatever you use. Right. There's a certain speed built into the process, which is now completely destroyed by using AI tooling. So if the process is important for the company to sustain itself, because of payroll and the internal roadmap, the marketing roadmap aligning with the actual product roadmap and all that, if that keeps people from being fast, yeah, then bringing in a completely new paradigm is probably a really good idea. And that is very interesting. I wonder, though, if there is still a reluctance in certain markets for AI adoption. I know a lot of people on the consumer side of things, they're happy to have a chat with AI, but they don't want it touching their data. They don't want it touching, you know, the things that previously were done by humans. They're really reluctant to do this. So this might be true for some markets, maybe most, but not all. I don't know. What do you think about this? Sure, yeah. [00:24:16] Speaker A: I mean, even on our team, our employees, my employees, our team members, I should say, have questions for me about this all the time. I suggest they use Grok for certain things. Like, hey, use Grok as the fact checker. That's how I use Grok. And so, you know, one of our employees yesterday was like, hey, I have a blog post I want to run through Grok, is it okay on the free version? Like, fuck yeah, it's a blog post. It's going to get scraped anyhow. But I think, like, even for us, that's a question. It's like, hey, on free or paid tools, what kind of data can I send? I think on a paid version you can send whatever you want. Like, we don't have anything that's so proprietary we can't send it. Not customer names, but basically everything else. Yeah.
So I think that there are industries. Medical is probably, like, the last one; it will be a little more hesitant. But I think in certain aspects, medical is going to be the first to get really disrupted, like radiologists or whatever. Right? So I think it's more nuanced than just, like, whole industries. It's certain applications and kind of niches within an industry. Yeah, I mean, I think that with respect to AI-ifying a conventional company, there are two angles. Like, I read this tweet this morning and really took a step back. One, acquisition is a strategy that we're pursuing, right? For growth. Like, we believe that we can build a portfolio of kind of creator companies. So podcasting, and you can imagine what's adjacent to it: email, website, social media, YouTube. Right? So we're going to build some, we're going to acquire some. I would very much rather buy a company with a thousand customers that's not super AI-hip, where the person built it by hand and it's in, like, some kind of crappy technology, and we can just plug that thing into Claude Code and, like, AI-ify the whole thing. I'd much rather do that than buy the AI-forward company, just because of, like, the economics of it. But maybe more importantly, I think about my business, and you probably think about your business in the same way: it's a spectrum, but how far along the spectrum of AI-forward am I? Yeah, because, like, you want to run Podscan for, like, a long time, I think. And I do too. Like, how. [00:26:48] Speaker B: What does it even mean? [00:26:49] Speaker A: Right. [00:26:51] Speaker B: When I think AI-forward, I think mostly from a technical side of things, as, like, leveraging AI for things that it's good at, not necessarily just putting a chatbot on it, which is what the AI visually-forward might mean. Right.
You can chat with your data or whatever, but, like, how much is a reasonable amount that leverages what there is without being super expensive, prohibitively expensive, or being just there for the sake of having AI, which is another problem, right? Like, you don't want to have AI features that don't make sense. You could easily build these, you can come up with all kinds of things, but people wouldn't use them. How much is too much? Or maybe, how much is too little? How much is, like, leaving the low-hanging fruit untapped? I think that may be the biggest problem at this point, because obviously AI is really, really useful for certain things. Like, if you have a chat or a customer service chat and you want people, first level, to get as fast support as possible, and you have a knowledge base, well, obviously you're going to [00:27:52] Speaker A: turn on the little. [00:27:53] Speaker B: Suggest the best article for the question. Don't talk to the customer, just say, hey, these are the top three articles that might help you at this point, immediately, before somebody gets to help you. That, to me, is like the minimum useful, but it also would suck if you didn't have this kind of feature in a little chatbot window. Which is funny, because I don't have it turned on. But very different story. But these little things, right? That's for every business to decide. Do we use AI as a full chatbot that tries to solve their problem? That's probably going to suck, and they're going to keep yelling, like, person, person, person, at your chat window. Or are we going to use it this tiny little bit, to give people more agency in that moment themselves? Or are we not going to use it at all, and hope that the human person is fast enough to then search and pull up the article? That's already a decision that you need to make on that particular level. And then it's like, well, now we look at it from the data level in our business. Do we use AI to, and that's what I do a lot in Podscan, augment existing transcripts, right?
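The first-level support flow Arvid sketches, surfacing the top three knowledge-base articles before a human replies, fits in a few lines. The article titles, keyword lists, and raw keyword-overlap scoring below are all invented for illustration; a real implementation would more likely rank with embeddings or a model.

```python
# Minimal sketch of "suggest the top three knowledge-base articles for a
# support question". Articles and their keywords are made up; scoring is
# plain keyword overlap, a deliberately naive stand-in for semantic search.
ARTICLES = {
    "Fixing RSS feed validation errors": "rss feed validate error xml",
    "Uploading and replacing episode audio": "upload audio episode replace file",
    "Understanding download analytics": "analytics downloads stats listeners",
    "Connecting your show to Spotify": "spotify connect distribution feed",
}

def suggest_articles(question: str, top_k: int = 3) -> list[str]:
    """Rank articles by how many question words appear in their keywords."""
    words = set(question.lower().split())
    return sorted(
        ARTICLES,
        key=lambda title: len(words & set(ARTICLES[title].split())),
        reverse=True,
    )[:top_k]

print(suggest_articles("my rss feed shows a validation error"))
# the RSS troubleshooting article ranks first
```

The design point is the one from the conversation: the bot never answers on its own, it just puts the three most plausible self-serve articles in front of the customer while they wait for a human.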
I've pulled a transcript, or I pull the audio, transcribe it, and then I'm like, who's talking? What are they talking about? Is this safe? I just recently included this GARM, what is it, brand safety framework in there. So it can now say, okay, this hits the safety floor, people are talking about terrorism, you might not want to advertise here. Or, this is perfectly safe, they're just, you know, making jokes and it's fun, right? That I'm trying to extract as well. And all these little things, super useful, right? Because if you have an AI scan a whole thing along these 11 categories and tell you, low risk of, you know, terror conversation, but people are using expletives a lot, that's great. Now I can show this to the people that use my data, and that is data that otherwise I could have never gotten unless I had people do it manually themselves. And they might even have missed something that the AI might have caught, right? So that's, to me, a good, useful AI use in the background. But do I have something like chat with this podcast? No. Like, why would I, right? That's some other tool that somebody else can and probably should build on top of my data. So they should be a subscriber, clearly. But, you know, there's a ceiling to things. There's a lower ceiling and there's an upper ceiling for each business, what AI can and can't do and what it should and shouldn't do. The hard part is to figure out, first off, what it is capable of, what it could potentially do for me, and how do I get it to do it at scale in a way that doesn't mess up the quality of my data. [00:30:24] Speaker A: Yeah. [00:30:24] Speaker B: Because it's still mostly guesswork that AI does. Right. When AI looks at a thing, it's all kind of guesstimates anyway. [00:30:31] Speaker A: Yeah. And I think, to take it to the marketing side of things, that lower floor, the playing field, that's not the right word, has been leveled for everyone.
Everyone can do baseline marketing, which means baseline marketing is no longer effective. And I think that's, like, maybe I'm a shitty marketer, because I think that's been the biggest. The technology and the product, like, that's all fine. Our product has basically been complete, kind of, for a while. We're not in new-product mode with Castos. We are with Linkberry, and that's been great, because we've been building it really quickly. But I think one of the biggest challenges is actually on the go-to-market side, like sales and marketing, where everyone can do pretty good content, can do pretty good SEO, can do pretty good social. Fucking Nano Banana, everyone can create pretty good images. Like, I think the biggest unlock has been on the product side and maybe, like, leveling the playing field and getting to that base level of marketing. Now that those two things have been solved, the big challenge is, how do you stand out? And I don't have a good answer for that. But I think that's the big challenge. Because
And I've been thinking about this a lot with Podscan too, because Podscan could easily be just yet another API product or yet another social monitoring product. But I was like, well, I want to be the "most" product. I want to have the most data on the most podcasts and the most transcripts — maybe not the fastest, or the hyper-established in certain industries, but I want to have as much data as I can, and I want to make that my thing. And other companies will do other things and make those their things. And that's how we differentiate, even though products can be quite similar, because it's so easy to get to parity between them. So, yeah, it's a weird thing, because you still need to look at what your competitors are doing in terms of advertising, in terms of marketing, and do the exact same thing to reach the same people in the same locations. You still need to functionally do the same thing. You need to do LinkedIn ads, you need to do paid Reddit ads — if your competitors are seeing success with these things, then you need to have a share of that pie. But what you do with that — the strategy, kind of the brand strategy — becomes so much more important, because that's the only way for you to be an authentic human anything in this very automated world. Yeah, that's the hard part, because nobody knows how to do this. Right.

[00:34:00] Speaker A: Right. And I think the challenge is, we've all kind of gotten a little dumber with AI, right? Because we're taking the critical creative thinking off our plate to a fair extent and just, like, rambling for 30 minutes into Otter, and then it goes and makes the product.

[00:34:13] Speaker B: That's right.

[00:34:14] Speaker A: Yeah. I mean, it's this — you know, we're becoming... what is it, the movie

[00:34:25] Speaker B: where —
[00:34:25] Speaker A: Where they're on the spaceship and the people all get fat and drink smoothies all the time? Like WALL-E, right? Yeah, WALL-E. We're becoming WALL-E in our little space suits and stuff.

[00:34:39] Speaker B: Yeah, yeah, yeah.

[00:34:42] Speaker A: So, like, SaaS is not dead, obviously —

[00:34:49] Speaker B: But I want to throw another one in, because "SaaS being dead" — I thought about this a lot last week. I think I may have talked about it on my podcast, or the next one, I don't know; I record a couple of weeks in advance. But I thought about exactly that phrase, because I heard it so much from people who I thought knew better. To me, SaaS was never about the software part. Like, it's software as a service, but the software doesn't really matter. It's always the "as a service" that matters in that phrase. And SaaS is: I built a software tool that I know everybody else can build. Even in the past, people could have built — people have built — other podcast hosting platforms, other podcast analytics platforms. They exist, and they existed in the past. But the "as a service" part — the thing that keeps it running, the thing that adapts to changing regulations, adapts to new technologies. As a SaaS founder, you build one thing at one time: that's the software. And then the "as a service" part is the other 80%, right? It's constantly updating, being secure, making sure new integrations work, finding new customers, giving customers what they need because their process changes. It's the constant adaptability. And that's the service part — that's the business. And I think AI tools and all of that, they're really good at building the software part, but they're horrible at building the "as a service" part. No AI tool can ever generate a business for you. They can only generate a product. And the product is a thing.
In time, it's an artifact — quite literally, because the AI completely forgets what it did after it built the thing. Like yesterday, when I built my little tool or whatever: while it still had the context of this whole vectorization thing, I had it write massive documentation of what it did, why it did it, what technologies it used, to what extent — and it's a gigantic markdown file that now lives somewhere in my code base, so that whenever I need to touch this again, I get at least a glimpse of what the AI was, quote-unquote, thinking as it was building it. But that understanding, that comprehension of my code base — that was gone the moment I closed my Claude Code session, right? I probably can resume it; I know that there's some kind of, you know, history and memory and all that. But effectively that moment is gone, because unless I make a strong effort to try to get back to exactly that moment, the moment I do a couple more things, I'll never get back. And that is problematic, right? Because if we can't build this understanding, then all we have is this artifact, and we can look at the artifact and try to figure out, well, what was the thing thinking as it was building it? But if there's no documentation or no way to restore exactly the thinking model that the AI had, then this is just one thing. And that's the big difference. Because as a SaaS founder — a traditional, legacy SaaS founder — you understood exactly what your code base was doing. So if some customer came by and said, well, one of my suppliers just recently introduced this new file format in one of my tools, can you support this? You look at the file format, you see: okay, this looks kind of like the other thing, but slightly different. Okay, sure, I'm going to build you an integration here. But if you give an AI this new file format and a code base that it has never seen before, right?
Or only has glimpses of — it's like, okay, I need to build an external service provider for this. And it starts building all these complicated things without really knowing where to integrate them, because it doesn't have a solid understanding of the history of your product. So every tiny new thing you build is just going to make this so complicated that at some point it's convoluted and you don't understand it. So that's the business part, that's the service — the service that you render for somebody paying a couple hundred bucks a month. It's not that you give them software; it's that you make the software pliable to whatever business case they have, constantly, over time. You keep it running, you make it better, you release new updates they don't even see, in the background, you know, for optimization purposes. All of that — that's the service part of SaaS. And that is not dying, because that is not replicable, at least not at this point. Agents are doing stuff, but it's by far not as much as a human founder with a full understanding of the full scope of a business could ever have.

[00:38:52] Speaker A: You touched on agents, and that's where I wanted to go next. What does an agent mean to you?

[00:38:58] Speaker B: So, the movie The Matrix — there's Agent Smith, right? No — well, to me it kind of is that, because they all look alike. To me, an agent is this faceless arbiter of trying something out, experimenting for you, and then showing you the results and hoping that they work. It's probably not the best description of it, but to me an agent is kind of a self-sustaining loop that is governed by a couple of rules — let's go for a technical definition here — and the loop is constantly rerun until the goal state is achieved. It's funny: in my Claude Code I have — what is it called? I think Augment. There's a company called Augment that has this kind of augmented coding, and they have a system prompt that is quite substantial.
I think it's like 200 kilobytes of text that is just XML describing what this loop looks like — this kind of agentic loop. You have these goals, you take tiny little steps, you make to-dos, and then you execute them, and then you review them, you verify them, and then you explain them to yourself. You persist that, then you explain that to the prompter. It has this all figured out and written out. And whenever I start a Claude Code session, I tell it to confirm that it is the Augment agent, pulling in this whole loop. So Claude Code has its own internal loop, and in that loop I have another loop that is the super well-defined agentic loop. And that loop is then constantly executed. I give it a prompt and I say: until this is done, keep looping until you have it. And then the agent goes, and it looks at the code, or it reads a file, it sends that off to an AI to understand, it gets it back, and with that understanding reads another file that the AI told it to maybe now look at. And then it sends that back to the AI, and back. And then maybe it writes a change to the file — it tells me to confirm it, or it's allowed to do it automatically. Maybe it runs a program locally; instead of just reading a file, it would run the file, see what happens, maybe lint it, maybe execute it. That's what the agent does. The agent has a task, a goal state, and it starts at a non-goal state. And then it loops as many times as it needs to reach the goal state and verify it. And then it's done.

[00:41:12] Speaker A: In my YouTube videos I always had — that's much more detailed than mine. My definition was: an agent has three characteristics. It has access to one or more models, it has memory, and it has access to one or more tools. Which is kind of what you're saying.

[00:41:30] Speaker B: Yeah, exactly. It's all of these. Right. And it can persist things in many ways.
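The loop Arvid describes — start at a non-goal state, take a small step, execute it, verify, repeat until the goal state is reached — can be sketched as a toy function. This is not Claude Code's or Augment's actual internals; `propose_step` and `execute` are stand-ins for the model call and the tool use.

```python
# Toy agentic loop: keep proposing and executing steps until the goal
# state is verified, within a step budget. A real agent would call an
# LLM in propose_step() and real tools in execute(); both are stubbed.

def run_agent(state, goal_reached, propose_step, execute, max_steps=10):
    history = []                               # the agent's working memory
    for _ in range(max_steps):
        if goal_reached(state):
            return state, history              # verified goal state: done
        step = propose_step(state, history)    # "model" picks the next action
        state = execute(step, state)           # "tool use" changes the state
        history.append(step)                   # persist what was tried
    raise RuntimeError("step budget exhausted before reaching goal state")

# Demo: count from 0 up to 3, one step at a time.
final, steps = run_agent(
    state=0,
    goal_reached=lambda s: s >= 3,
    propose_step=lambda s, h: "increment",
    execute=lambda step, s: s + 1,
)
print(final, steps)  # -> 3 ['increment', 'increment', 'increment']
```

Note that the loop checks the goal *before* each step, so "verify, then stop" falls out naturally — which is the "reach the goal state and verify it, and then it's done" shape described above.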
It can persist things into its memory, it can persist things into whatever system it's running on.

[00:41:43] Speaker A: Right, right.

[00:41:45] Speaker B: It has persistence — or at least temporary persistence, which is hilarious because those are two very different concepts — which helps it get to more complex things. And it can spawn more agents; let's not forget about that. It's recursive.

[00:41:59] Speaker A: Right. And I ask about this because it seems to me that we all — not we all, the big frontier model companies — have put a lot of their effort into tools for developers. Claude Code and Codex and Mistral and whatever: agents, or agentic tools to do coding things. I don't see anyone doing the same. Like, who's the Claude Code for marketing? And you could say Claude Code is really a bad name — it's really like Claude Agent, and you can do a whole bunch of marketing in it. We run in production several Claude Code projects to do marketing. They're amazing.

[00:42:42] Speaker B: Yeah, that's cool.

[00:42:44] Speaker A: But they are really just fancy versions of a chatbot. You go in, you do a thing, it gives you a thing back. The thing back is amazing — way better than any other tool I've ever used. And I think it has to do with just the amount of context that it has, right? It has hundreds of pages of context. But they're kind of dumb, and they're manual. I guess the reason I bring this up is: I think the next frontier for any company becoming more AI-forward — which should be the goal for all of us — is to let the agents become autonomous, continue to run, continue to make decisions, and spawn sub-agents. The vision would be: you just have a marketing agent, and it has autonomy and authority and tools and permissions to do marketing. That's running paid ads, doing SEO, doing social, doing cold outreach, and it is just —

[00:43:49] Speaker B: Fuck.
[00:43:50] Speaker A: It's not that hard, right? There's six channels, each has its own sub-agent. The top orchestrator agent calls those, gets data feedback, it pulls Google Analytics, it pulls Stripe information, and it just does marketing, just like you would if that was your full-time job. Yeah, we're not there, I don't think —

[00:44:07] Speaker B: But we're on the way, for sure. I see people experiment with this. A lot of marketers that are kind of techie have figured out that Claude Code with an MCP connecting right to — I don't know, Instantly or Apollo or these kinds of tools — is a godsend. It pulls data from there, it automatically creates LinkedIn lookalike lists and then uploads them into LinkedIn Sales Navigator or whatever. You can connect these tools already. It's Stone Age tool use — that's kind of what this sounds like, right?

[00:44:36] Speaker A: It is. It's like the wooden axe in Minecraft, still. That's where we are.

[00:44:42] Speaker B: But it is an axe, right? People already know that it can be sharp if they wanted it to be. And it's hard to make it sharp, but it can be. And the next iteration — continuous agents — I'm excited by this just as much as I'm scared of what might go wrong. Although the same goes for people: hire the wrong person and stuff goes wrong just the same, maybe even more stupidly. But the problem with automated systems is that they can very quickly operate at a scale that you have a hard time comprehending. Same with code, right? Maybe that's the perfect example. The fear that I have is not that the system couldn't work — I think it already does for some. And within the next year, I think we'll see the first platforms that supply us with these agents, where we just have to put in our API keys or whatever to the services that we want them to use, and then we have a setup step, and then it just starts learning and gets confirmation on whatever. Those things are being built as we speak, I bet.
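Craig's shape — a top-level marketing agent fanning work out to per-channel sub-agents and aggregating their data — is structurally simple, as he says. A hypothetical skeleton (the channel names, report fields, and reallocation rule are invented; real sub-agents would wrap LLM calls and channel APIs behind this interface):

```python
# Hypothetical orchestrator: one sub-agent per marketing channel, each
# returning a report the top-level agent aggregates and acts on.

from dataclasses import dataclass

@dataclass
class ChannelReport:
    channel: str
    spend: float
    signups: int

def make_channel_agent(channel, spend, signups):
    # Stand-in for spawning a real sub-agent with its own tools and
    # permissions; here it just returns canned numbers.
    return lambda: ChannelReport(channel, spend, signups)

def marketing_orchestrator(sub_agents):
    reports = [agent() for agent in sub_agents]        # fan out to channels
    total_spend = sum(r.spend for r in reports)        # pull data back in
    total_signups = sum(r.signups for r in reports)
    # The orchestrator's "decision": scale up whatever converts cheapest.
    best = max(reports, key=lambda r: r.signups / max(r.spend, 1.0))
    return {"spend": total_spend, "signups": total_signups, "scale_up": best.channel}

agents = [
    make_channel_agent("paid_ads", 500.0, 20),
    make_channel_agent("seo", 100.0, 15),
    make_channel_agent("cold_outreach", 50.0, 2),
]
print(marketing_orchestrator(agents))
```

The missing pieces the conversation flags — autonomy, continuous running, and real permissions on ad accounts and analytics — are exactly what this sketch leaves out.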
Or they're built internally for a lot of companies already by their capable vibe coders — it's not even the right phrase, they're still developers, right? But when I look back at my — and I'm going to bring this up a thousand more times today — my podcast similarity engine: the fact that we built this whole thing in 20 minutes is a speed at which, even if I was trying to read the code as it was written, I could not have comprehended it. So if I wanted to keep comprehension of my code base as it was being written, I would need to tell it to stop and slow down. Which would kind of suck, because the outcome was: the thing worked after two or three little corrections that I needed to make.

[00:46:18] Speaker A: Right.

[00:46:19] Speaker B: So if my goal is to have things that work, then I have to start accepting that I might not keep up with how they work anymore. And that's the scary part to me: that I have to delegate to a non-human that operates at Lieutenant Commander Data speeds, if I may use the Star Trek: The Next Generation reference.

[00:46:39] Speaker A: Love it. Love it.

[00:46:40] Speaker B: Yeah, right. That was always both the joke in Star Trek and the miracle in Star Trek — that Data could type so fast, and could read everything by just looking at a page for a couple milliseconds. We are there: Claude Code is Lieutenant Commander Data — also trying to be a human and not to die, but that's a whole other philosophical thing. We have the technology, and the speed at which it can operate eclipses what a single human can understand without a brain-computer interface, which we don't have yet. I guess. I don't know — do we? But we have to give this up if we want the benefits of that calculation. So yeah, that's what frightens me a little bit. How do you feel about this? Would you give full API access to all your email services and all your CRMs to an agent and say: just do marketing, go for it?

[00:47:33] Speaker A: Right.
Like, today, with no history of its taste and its philosophy? No. But a hundred percent, that's just what's going to happen. It's the exact same analogy: Claude Code and Anthropic have done this for development. Nobody's manually going and writing code — they shouldn't be. I still go into Drip, I go into LinkedIn, I go into WordPress. Why? That's so crazy, you know? Why are we doing this? 100% it will happen. To some people — like you were saying at the beginning, financial companies won't do this; they have too much reputation on the line — maybe they'll just be delayed three or four years. Sure. But 100% people will get there, and that's just leveling up that lower barrier again. So everyone's going to do marketing, right? And so it just moves the goalpost of what we need to consider further down. I want to wrap up with one statement from me, but one question for you — for everybody — which is: one of the challenges I have, and you mentioned this, is how do I keep up with everything? Give a tool or a tidbit or something, like the Augment Code kind of thing. Give a nugget to folks to help them level up, from a trusted source.

[00:49:09] Speaker B: Yeah. Let me think about this for a second.

[00:49:13] Speaker A: Okay.

[00:49:13] Speaker B: Because there's just so much going on. I think, generally: if you're not using an agentic coding tool like Claude Code or OpenAI Codex or — what is it, Antigravity? What are these things called? I don't even know. Even the vibe coding thing from Mistral yesterday — I'm like, okay, this is great, but I'm not going to install it; I already have what I need. Give these things a shot. Allow one into your code base and just do one thing.
If you're hesitant to do any kind of agentic coding, tell it to do a safety audit or a security audit — just to tell you the top 10 things that might be problematic in your code base when it comes to application security — or to write tests. Maybe tell it to use the OWASP kind of security guidelines, or tell it to do some research before it starts, and let it, you know, Google for whatever. But tell it to do an extensive safety audit and not change any code — just give you a markdown document with a report. You will be surprised, if you use the current frontier models, by just how insightful and complete these things can be if you give them the proper amount of time. That, to me, is something that everybody can do. I do this on a regular basis — I do this before every push, like the thing yesterday. It's almost like these pre- or post-commit hooks that we used to have in GitHub or in Git, where you run a linter on your code whenever you commit something, to give it some kind of clear, standardized format. You can now actually run an agentic post-commit command, which is like: scan the whole code base for these things. You can do a security audit, or you can do a taste audit — does it still look like we describe it in our, what would that be, UI library or component library or something like this? You can build these things and have non-invasive reporting built into your development chain. Even if you don't use any of these cool AI tools at all, this will give you stuff that doesn't change your code but changes your perspective on the code. And then the Augment thing — I think I've shared this somewhere, I'm not sure. Maybe I can give you a link, or just give you full insight into what my Claude Code file looks like. It's really, really cool to have an additional agentic loop inside your agentic loop, because you can customize that one.
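The non-invasive audit Arvid describes — an agent that reads the code base and writes a report without touching any code — could be scripted roughly like this. Treat it as a sketch: the `claude -p` invocation assumes Claude Code's non-interactive print mode, the prompt wording is just one way to phrase the request, and the report filename is invented; adapt it to whichever agentic CLI you actually run.

```python
# Sketch of a post-commit-style, read-only security audit: build the
# prompt, then hand it to an agentic CLI (assumed here: `claude -p`).
# Nothing here edits code; the agent is asked only for a markdown report.

import subprocess

def build_audit_prompt(report_path="SECURITY_AUDIT.md"):
    return (
        "Do an extensive application-security audit of this code base, "
        "using the OWASP Top 10 as a checklist. Do NOT change any code. "
        f"Write your top 10 findings as a markdown report to {report_path}."
    )

def run_audit(report_path="SECURITY_AUDIT.md"):
    # Assumed CLI shape; check your agent tool's docs before wiring this
    # into a Git post-commit hook.
    subprocess.run(["claude", "-p", build_audit_prompt(report_path)], check=True)

print(build_audit_prompt())
```

Kept as a standalone script rather than a hook body so you can dry-run the prompt first; calling `run_audit()` from `.git/hooks/post-commit` gives the agentic linter-style chain described above.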
All the providers — Codex and Claude Code and stuff — have their own internal version of what works best for their system. But if you need more structure, you can rebuild that loop and put it in your system prompt. So every time something gets run, the full system prompt gets taken into account. Highly recommended. And if you had asked me about this tomorrow, I probably would have another tool I could recommend. But I think: give it a shot. That's the best thing — a non-intrusive coding agent. That is, again, the baseline, the foundation that everybody has access to. If you don't do this — why not? It's a free audit, right?

[00:52:10] Speaker A: Yeah, yeah. And even on the $20 Claude Pro version, you can run this for hours a day. So there's no excuse. And if you're, you know, a production-environment SaaS app, you have all sorts of Git branch rules and stuff, where it's not just going to go do crazy shit and push code live for you.

[00:52:31] Speaker B: I have another one. Can I throw another one in? This recently happened to me. I have a lot of VPSes — a lot of servers somewhere that I just SSH into. I know it's kind of old school; it's not Kubernetes-or-whatever orchestrated. But another really fun thing, on hopefully a non-production system, or a system that you can easily back up and restore: you can install Claude Code on that server and have it fix server issues for you. It's extremely powerful. Server is running out of memory, server has weird file descriptor issues — install Claude Code and let it diagnose what's going on. Either you can do it in the chat version of Claude or in ChatGPT — you ask it "what should I run?", then you run it, then you copy and paste the output and pull it back in — or you might just install the agent and have the agent try to fix it for you.
Surprisingly, most of the time it's very careful, because you tell it this is a production system — and it doesn't have to actually be one for you to tell it that it is, right? And then it will very carefully try to figure out what's going on and help you diagnose problems. It's kind of the unpaid intern with the collected insight of every sysadmin that has ever lived and posted anything about it. Right. Super, super useful, even on your local computer. But yeah, make sure you have backup opportunities.

[00:53:49] Speaker A: Yeah, yeah. I've not done that. I keep thinking there needs to be a way to deploy Claude Code projects. We share them amongst our team and, like, fucking — it's in GitHub, and then support people are in GitHub, and you're like, what are we doing here? So I think some kind of user interface to Claude Code seems to make sense. The one I'll leave with — and it's a total downer — is: I worry for the financial system, if these agents can get smart and fast and good enough to make the market completely efficient. Because that's the thing, right? The market is somewhat inefficient, and that's where alpha exists. If that goes away, what happens? If we have trading agents — I saw this thing on Twitter the other day: the trading agents, they had all these simulations, and the trading agents with real-time permissions and real money were doing amazing, way better than, you know, the benchmark funds and stuff like that. I think that's — you know, you don't know what a black swan is until it's there. But that's the one that scares me, because there's nothing any of us could do about that.
You know, if these trading engines come out and just level the playing field for the financial system — that is where all the money in the world is. Your business and my business are nothing compared to the trillions of dollars in finance and investing in the stock market. That's the one that I worry about. And that's where we start looking at, like, hey, can we go buy 10 acres in New Hampshire? Just in case this gets really squirrely.

[00:55:24] Speaker B: It's probably a good idea to have a forced contrarian perspective here, because you could argue that an efficient market is the goal, right? You could say that the market becoming more efficient is a good thing — maybe not for speculative trading or Polymarket or whatever. But if the market is efficient — by definition, and I know we're talking about this from within the capitalist system and all of that — well, then the market couldn't be any better. And that was always the goal, right? It is in the inefficiencies of the market, or in the knowledge of the market, that we make money. But if there are no inefficiencies, that means we've reached the ultimate goal, the target state. The agentic loop of trying to make the market better would reach its goal state at that point. Of course, it's almost a philosophical perspective, because that level of perfection does not exist; there still is information advantage and disadvantage. But I think if that is something you fear, then yeah, look for things outside of the speculative stock market. Look for things like — and not just owning land — being part of a community of people that is supportive of each other, right? Where you don't have to fight the people around you. We are in this competitive market, we all do that, right? We all fight with our competitors. But maybe there is a state for you as a human being to be in that is not hyper-competitive like that.
So maybe that's the silver lining: to look for human connection in a world of agentic systems.

[00:56:58] Speaker A: I think that's where it comes full circle, probably. Yeah. Neat, buddy. This was really fun. Thank you for coming on. It's good to catch up. Folks who want to check out Podscan — .fm, or .com?

[00:57:11] Speaker B: I've got the .com too. I just have never made the move over, so it redirects to the .fm, which is the thing.

[00:57:18] Speaker A: Good for you.

[00:57:19] Speaker B: Yes. Thanks so much for having me on. I always love chatting with you. Obviously, chatting to anybody in the podcasting space on a podcast is like the most meta thing you could possibly do. It's great. Love this, and I hope we both find meaningful and just fun ways to use this technology without succumbing to it.

[00:57:38] Speaker A: Right.

[00:57:39] Speaker B: I think that would be fun. Always excited to hear what you're building. I'll check out Linkberry. That's exciting.

[00:57:45] Speaker A: Yeah, it's cool. It's cool.

[00:57:46] Speaker B: My audience is on LinkedIn too, wink.

[00:57:48] Speaker A: So — we'll include links in the show notes for everyone to connect with you offline. I appreciate it. Thanks, Arvid.

[00:57:55] Speaker B: Thank you.
