What's going on, everybody? Welcome to another episode of Adventures in DevOps. Warren joining me again. I keep making you feel like the new guy, but it's been, what, almost a year now? And I've got my pick prepared. It was a recent... well, I don't want to spoil my pick, so I'm not going to say what it is. But the conclusion is that AI may be making us stupid. The idea is that AI hugely decreases how much we're utilizing our critical thinking, so we're not training that skill, and this could be the beginning of the downfall of humanity. And that's all I'm gonna say. I don't know, I sort of take issue with that, because I remember hearing the same thing from my teachers about spell check: like, oh, you're not always going to have a computer in your pocket, you need to get over this dyslexia thing. And as it turns out, I do have a computer in my pocket, and no, I still do not know how to spell. We're fine. The skill set's gone, but it's gonna be okay, everybody. The same thing happened with calculators as well. But I'll say more about that at the end of the episode. Right on. Hi, Jillian, welcome. Hello. All right, this is going to be a cool conversation. Joining us today, we have the founder and CEO of Warp, the Warp terminal, Zach Lloyd. Zach, welcome. I'm excited to be here; thanks for having me. I'm excited to have you on here, and just to pick your brain about this, because I first saw the Warp terminal... it's been several years now, so you've been working on this for a while, and at first it was so confusing to me. I was like, wait, this isn't what my terminal is supposed to do. It's offering up stuff; how do I trust this? So before we dig into that, tell our listeners a little bit about Warp and what it does. Yeah, so Warp is a reimagination of the terminal. You can use it like a regular terminal, so you drop it in and use it in place of whatever you're currently using; if you're on a Mac, iTerm or just the stock terminal app. The idea behind it is that it has a much more user-friendly experience, so basic stuff like the mouse works, for instance. But increasingly it's about being intelligent, and so when you use Warp, the main distinguishing thing these days is that you don't have to enter commands. You can just instruct the terminal in English, tell it what you want it to do, and it will solve your problem for you by translating your wishes into commands using AI. It looks up whatever context it needs and kind of guides you through whatever task you're doing, whether it's a coding task or a DevOps task or setting up a new project. So it's a totally different way of using the command line that I think is pretty fun to use, and definitely more powerful than your standard terminal. And we're kind of having an internal debate at this point about whether or not it's even right to call it a terminal, because it's so fundamentally different from what people expect when they use a terminal. But it does work; I think it's a really, really nice terminal as well. Yeah, for sure. The terminal features are definitely all right there and ready to go. I think a really cool way to get used to it is to just drop it in as your replacement terminal, and then you can start picking and choosing all of these other things that it has as you get comfortable with it.
I want to say, I really like that it uses the mouse, because I have a bit of a horror story of trying to get somebody set up with Vim. I felt very proud of myself, like, oh look, I got the scientist using Vim, and then they were like, great, how do I use the mouse? And I was like, oh no. So I think that's a nice feature. The other thing that it will help you do is figure out how to quit Vim, if you end up in Vim and it's not what you were trying to do. Which is, it's one of our most popular features: you can ask the AI how to quit Vim. It's very funny, because people do end up in there and they're like, what? Oh, you mean, like, quit the application? Not, like, quit the addiction. No, people love Vim. There's a twelve-step program for that now, and Warp is it. They need a new one; they need twenty steps. Cool. So how long have you been building Warp? We've been at it for a while. The company started during COVID, so, like, the middle of twenty twenty, and we first launched something publicly in twenty twenty-one. And it's just sort of evolved from something where the main value initially was, hey, let's make this tool a little bit easier to use and fix some of the UX, into something that's much richer, especially once ChatGPT came out, and we were even doing some AI stuff before that. But we've been working on it for a while now. Right on. What's the thought process that goes into figuring out how to integrate AI into this? Yeah, so we went through a bunch of different stages. The first stage of AI in Warp was essentially translate English into a command, so you could bring up this little thing, and it actually predated ChatGPT; we used something called Codex, which was, I think, an OpenAI coding API. You could be like, you know, search my files for this specific term, and it might generate a find command or a grep command, something like that. It was very much a one-to-one English-to-command translation. The next thing we did was, when ChatGPT came out, what I think a lot of apps did at that time, which was put a chat panel into Warp. So you could have a chat panel on the side where you could ask coding questions. You could be like, how do I set up a new Python repo with these dependencies, and we'd give it to you as a chat, and then it's sort of a copy-paste type experience where you would take what was in the chat and move it into the terminal. And that was cool, but, I would say, limited extra utility compared to just doing it in ChatGPT. The biggest change we made was basically the idea that the terminal input where people type commands could also be used directly as a conversational input to work with an AI, and that the AI itself would end up sort of interspersed in the terminal session. We call this agent mode. In this world, it's not just that you chat with it, it's that you tell it what to do, and it's able, on its own, to invoke commands to gather the context it needs to help you do a thing. So, for instance, to go back to that same example, like, help me set up a Python repo with these dependencies: instead of doing it in a chat panel, which we got rid of, you just type that into the terminal input, and we detect that you're typing English and not a command.
And when you hit enter, it follows up and says, okay, what directory do you want this in? And you tell it what directory, and then it will make the directory for you, it'll cd into it, create the git repo, it'll do all the pip installs, it will even generate the initial scaffolding of the code. If it hits an error, it can debug its own error. And all of this is happening within your terminal session. So you get to a point where you're actually driving the terminal a little bit more in English than you are in commands, and it's kind of crazy how it's changing how people use the terminal. Like, I was just looking at this yesterday: in Warp now, like a quarter of what is going on in terminal sessions is actually just English and AI-generated commands, and not people typing cd and ls anymore. So that was the evolution, from a very bolted-on thing to something where the fundamental experience of how you use the tool has changed a bunch. Yeah, so you're completely changing the interaction there. Instead of saying 'how do I,' just saying 'go do it.' Exactly, exactly. And that actually takes... like, developers don't necessarily think to do that. They're very much in the, okay, let me google this, let me go to Stack Overflow type of mindset, and it's a totally new behavior if you're a developer to just be like, I'm just going to tell the computer what to do. It's a little bit scary, because it's your terminal, and now the computer is just doing stuff in your terminal. But I do think that's the future of how development, DevOps, whatever you're doing as a developer, works. It's going to move from this 'let me run a bunch of queries' or 'let me open up a bunch of files' kind of thing, to a world where you're just like, hey, let me actually tell my smart AI, whatever you want to call it, assistant, agent, whatever, to start me on this task. And the agent will loop me in, get more info, you know, leverage me when there's ambiguity to resolve. But it's going to be an imperative, I'm-telling-it-what-to-do way of working. And the cool thing about the terminal for doing that is, that's kind of what the terminal is set up for. If you think about it, the terminal is set up for users to tell the computer what to do. It's just that we're upping the level of abstraction, from you telling it in terms of grep and find and cd and ls, to telling it at the level of a task what you want it to do. And so that's the vision that we're building towards. Right on. I think it's a really great analogy, you know, because we've seen that in other areas of software development, where you just keep abstracting things away more and more, and coding at a higher level. But this is one of the few projects where you're actually doing that at the task level rather than at the coding level. Correct. And you can code in Warp too. I don't know, did you all see Claude Code? Have you played with that at all? I have a little, yeah. So Claude Code is super interesting from our perspective, because it's all terminal based, and it's all this imperative thing: you run a terminal program, you tell Claude Code, hey, you know, make this change for me, and it skips the file editor and IDE entirely to do coding stuff. And we have a very similar feature in Warp.
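To ground the agent-mode example above: the sequence of commands such an agent would effectively run for "set up a Python repo with these dependencies" looks roughly like the following, assuming a plain venv-and-pip project. The project and package names are illustrative, not from the episode.

    mkdir my-project && cd my-project          # make the directory and cd into it
    git init                                   # create the git repo
    python3 -m venv .venv                      # set up an isolated environment
    . .venv/bin/activate
    pip install requests pytest                # "all the pip installs" (example deps)
    mkdir src tests                            # initial scaffolding of the code
    touch src/__init__.py tests/__init__.py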
The difference is that you don't run a program within the terminal; you just tell the terminal what to do. But I think it's interesting in terms of the types of tasks that you can do. And if you look at, have you all used Cursor and Windsurf, those types of apps, to do any coding? Yeah, a little bit. So, in those apps, the initial feature that was the magic feature, and this is true for GitHub Copilot too, was that it will do great code completions for you. It gives you this ghost text as you're typing, and it sort of completes your thought. And the thing they're building out now is much more like a chat panel within those apps, where you can tell the computer what to do and it generates code diffs, and they're creating something that looks an awful lot like a terminal interaction, but within the code editor. So I do think there's this general shift going on for coding, and I think it's also going to really impact people who are doing production DevOps, basically any type of interaction with systems, where you just start by telling the computer what to do somehow. So it's pretty neat to see. So I really like this, because I spend a lot of my days trying to convince biologists that, like, you need to be able to use the terminal at least a little bit, and it's always a tough sell, because being told, well, go over here and take this Linux class, is not what they want to be doing, let's say. So just being able to say what you want, in English, and it will at least get you to the right directory and install your Python environment and do this kind of stuff, is just so much nicer than what I've been doing in the past. I like this. This is great. Yeah, I mean, the other cool thing, for people where it's not their natural environment, let's say, and they have to use it, is that as you use Warp to do this stuff, it teaches you. It doesn't just obfuscate. At least for now, the way it does it is, you type in, hey, I want to create this project, and it says something back to you like, okay, here are the commands that need to be run in order to create this project, are you cool if I run these commands? And so, Warren, to your earlier point, like, is this just making us all kind of dumber and not knowing how to do anything? It's possible. But there is also an aspect of, it's kind of like working with the smart person on your team who can show you how to do things, and, you know, hopefully you pick it up, because it is in some ways faster, if you know what you're doing, to just type the commands. And I think in general, I don't think it's a great outcome if everyone who's doing development or working in the terminal doesn't know what the hell is going on, because inevitably you're going to get to some point where you kind of need to know in order to fix something. So, you know, the hope is that this doesn't make people dumb, that it makes people more proficient, but I think there's a risk, for sure. There are actually two things that this reminds me of. The first one is, a long time ago, and I don't know how well it's maintained, but there was a program that you could install into your terminal called thefuck. Yeah, no, we've patterned off of that exactly. You've never seen this before?
Something that actually happens sort of often is that a command line program you run will tell you what you did wrong, in a way, like, did you mean this? And instead of having to retype the command and fix the problem, you could just type 'fuck' and it would read the output and then do that thing. That's the first one. So if you haven't seen that, I highly recommend at least checking it out. And the other one is this thing that totally changed how I use the terminal for doing software development, for interacting with git repositories. There's actually a git configuration that you can set up to automatically fix typos. So if you type something wrong, it will swap the letters around and be like, oh, okay, you probably meant this, with ninety-nine percent accuracy, and then just do that command anyway. And you can also set a timeout, so, you know, if you accidentally typed something and it's gonna start deleting your whole code base, you can be like, oh wait, no, I don't want you to do that. But that actually brings me to a question I want to ask, which is, I see more and more of these pieces of software, I'll call them agents, that are interacting with your operating system directly. And me, I'm super risk averse. I want to keep every LLM, or non-thinking creature, in its own private box where it can't accidentally delete my entire operating system. So that's what I wanted to know: why would I trust the agent with my system? Yeah, so how do you manage this, is the question. Yeah, I mean, it's almost like I would want to run two computers side by side. I mean, I'm already really concerned about running external software on my machine from, like, a malicious standpoint. Very rarely will it break my operating system; I don't remember the last time that happened. It was probably when I was using Windows, like, over a decade ago. But when it comes to LLMs and things like that, I know from firsthand experience there's a non-zero chance that it just figures out the wrong thing to do. And that's the sort of thing that I almost want to sandbox as much as possible, and I feel like we're not getting closer to that, because our operating systems don't allow it as much. It's a great point. I mean, you have a couple of choices. Let's say you're using Warp. One, you can just turn this stuff off, if you're just like, I don't trust that, I don't want it. That's fair. There's a toggle that just says AI off, and that's it; you're back in control. You can also control the level of autonomy it has. One of the levels you can have is that it can't do anything on its own: it can suggest commands, and you manually approve anything it suggests. There's a level up from that, which is that you can provide an allow list and a deny list. You could be like, oh, it's fine, it can run cat, it can run less, it can't run rm. And you can go a level up from that. I feel like I'd want to be able to let it run read-only commands, and let an LLM determine what it thinks is a read-only command, which it's pretty damn good at, but not perfect. Like, if you had some crazy piped thing, or a heredoc or something like that, it might get confused, but it's pretty good.
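For reference, the git feature Warren describes just above is git's built-in help.autocorrect setting; a minimal sketch of the setup, including the timeout he mentions (the value is a delay in tenths of a second before the guessed command runs):

    # Show the guessed command, wait ~3 seconds, then run it:
    git config --global help.autocorrect 30

    # A typo then behaves roughly like this, with a window to hit Ctrl-C:
    #   $ git sttaus
    #   WARNING: You called a Git command named 'sttaus', which does not exist.
    #   Continuing in 3.0 seconds, assuming that you meant 'status'.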
Or, going the other way on autonomy, you could be like, you know, YOLO: it's not that big of a deal if it messes up my git repo or whatever, and I'm gonna let it run. And then the other thing we're working on, that we don't have yet but I think is really important in this world of more autonomy, is: what's the fastest way to spin up a sandbox, where whatever state you want it working on is replicated, and it can just go to work there without you losing any sleep that it's gonna do something irreparable? I think an undo functionality is super interesting too. It's not trivial to do that in the terminal; the terminal is a stateful place where, you know, you can delete files, and there's no undo. So you kind of have to figure it out; sandboxing is sort of the safest. But we're aware of this issue, and it makes sense. A surprising number of people don't give a shit, I will say. They're just like, this thing is just magic, and it makes me so much faster and makes my life so much more fun that I don't really care. But it's a totally fair point. I wouldn't. Like, they're not using this at NASA. And I'm like, you know... well, not yet, right? But probably. Honestly, I have some theories there, but I think if I say them we'll definitely get canceled. Yeah, I think that's sort of the problem. And, again, I don't want to spoil my pick, but realistically a large majority of the population falls into this area of: maybe they have concerns, but they're apathetic about actually turning off whatever the source of the potential problem is. There's not a good way to moderate AI from outside, or LLMs from outside the black box. It's really all or nothing in a lot of ways, and most people are not going to turn it off, because they still perceive some huge amount of value from utilizing them. So it's, you know, I'm not going to turn off the future, I'm just going to be really scared about what it's going to do when I'm not looking. Yeah, yeah, I think that's right. And people obviously have a strong predisposition to do whatever you set the default to. They might not even know what the heck is going on. But I don't know, developers are maybe a little different. Like, I feel like if anyone's gonna go tweak the knobs, it's gonna be, you know... Except I don't think so. I think everyone has their depth where they feel comfortable, and if they're comfortable pulling in an LLM to solve part of their job or part of what they're doing, it's probably in an area they don't care about, and so they're probably not going to tweak the knobs. I think another aspect here is, I have a very close friend who went away on vacation, and the person who was cat-sitting for them left some plastic on the stove, which was induction, and it was totally fine, it was off, but one of the cats managed to turn the stove on and actually melted the plastic. Yeah. And this is really funny, though, because there was no LLM in there, right? The cats were fine. The thing is, I really do fear that at some point someone's gonna put an LLM in my stove. It's going to happen at some point, and I don't think we can avoid that future.
And I do fear that it will just turn on one day when I'm not here and start doing things. Like, I have no need for that, and I'm not thrilled about this future, but it's coming. Kelsey Hightower had this good tweet, which was, he was like, I'm actually at the point where I will pay more to not have a smart appliance. And I was pretty much like, I get it. I don't need my refrigerator having Wi-Fi or whatever. That makes sense. On the LLM side, though, for developers: this might not be a popular opinion, but I think you're not really going to have a choice as a developer, if you want to continue being a productive developer, on whether or not you adopt this technology. It's kind of like being like, oh, I only want to work in assembler, I'm not going to use a high-level language. That's not a viable choice going forward. What you're gonna have to do as a developer, if you want to be productive, is learn how to use all this stuff, and learn how to use it in a safe and productive way. Is that unpopular? Let's have a fight. No, let's go around. You know, Jillian, what do you think? Agree, disagree? I think so. Like, I'm pretty judgmental of developers that don't use a debugger, so I can see this being just the next iteration of that process. Yeah, because, I don't know, I think everybody's kind of drawn to development because everybody has the 'I like to learn new things' disease, and writing code is really good for that, and then at some point you get really tired of it. And so then AI is really good for that process, when you're like, all right, I'm sick of having to learn the new things, I just want the AI to tell me what to do, and there we are. So I'm gonna go with mostly yes, except that I feel like I might get some angry responses on the internet for that, so I'll give a little caveat. There is a fear, an understandable fear, that developers have that this is going to replace them. I don't think that's even remotely true. There's also a thing I've noticed, which is that a lot of the more experienced, really strong developers on our team, and who I've worked with, kind of get the least value out of it initially, and are most likely to be like, oh, this is a stupid suggestion from this thing, or, it's creating bad code. And so they have a kind of anti take on it. But eventually people get to a sort of moment with it where they're like, oh shit, this actually makes my life a lot easier and does some of the stuff that I find super annoying. And I think the proper outlook to have towards it is: this is another tool that I can use. Just like if I master sed and grep I'm awesome as a developer, I think if you can figure out how to effectively use the LLM, it just makes you better. I think that's, for now, the right way to look at it. Warren, what do you think? Well, I have the opposite controversial opinion, so, you know, I was maybe thinking about keeping my mouth shut. I have this perspective that it definitely replaces inexperienced engineers. And the problem with that, and I think this is where the fear comes from, is that it almost doesn't matter whether LLMs can actually replace inexperienced engineers. People think that LLMs will replace inexperienced engineers, and they act on that anyway. And I think we're already starting to see that happening.
And the problem with that is, you're paying money for these tools, and you're not training your organization's people on leveling up their skills in these areas, so you'll become more and more dependent on the tools and move further away from those skills. Now, on the productivity side, I still think it costs way too much. I think there have to be orders-of-magnitude cost reductions in generating answers before this becomes high value. You mean monetary costs? Monetary, environmental, et cetera. None of the AI companies pumping out AI are making money. You know, OpenAI, sure, whatever. We know Anthropic's not making money. Whatever they are, it's negative billions of dollars per year on this. So, you know, that's not a sustainable model from a society standpoint. Something is going to have to change: either these tools will completely go away, or the costs will have to come down. I think the last thing is, what we find from a productivity standpoint, at least for me, myself and the companies that I work with, is that the bottleneck isn't doing more work, or specifically writing out code or pushing that out. So the tools don't solve the needs that we have. It's okay for us to still be slow in this way, or not be productive in this way, because that's not where our bottleneck is. I disagree with almost everything you just said, but it's interesting. It's interesting to have this discussion, because I'm so in the AI bubble, of, like, Silicon Valley people and AI tech companies, and the main contention that I hear amongst the people I talk to, on the investing side and on the AI company side, is: how quickly are we getting to AGI? And Warren is coming in hot with, these things are not even valuable. It's not even AI. Like, I hate this term. These companies are lying to the masses of people, saying we have AI. All we have is transformer architecture, which is able to, you know, create LLMs, and they will always hallucinate. And that's the ridiculous thing. I'm waiting for someone to say how OpenAI is going to recoup the billions of dollars they are losing every single year. Where does that change? Because the money will run out at some point. Oh well, do you want to go, or...? I'm going to jump in real quick, then we can come back to that. This is good. I tend to agree with you, Zach, that there are going to be people who are resistant to AI. And I think the primary place I've seen this is people who are really passionate and invested in their chosen language. You know, if we look at the category of people who will argue Go versus Rust, and they've pinned their career on 'I'm a Rust developer' or 'I'm a Go developer,' they'll try something like AI, or any of those related tools, and say, oh, well, it got this wrong, and that's clearly why I'm not going to rely on this thing, because it got this one thing wrong. And you'll get a lot of resistance from those people. I think AI is like another tool. I mean, I guess, more than what you're saying: with all the money being spent and the environmental cost, that is very valid. But from the tool perspective, I'm already so dependent upon tools. Without dictation software, PyCharm, and Vim, I'm completely useless.
I have zero utility to anybody, anywhere, at any time, in a professional context anyway. I mean, I do have kids; occasionally I'm useful in a human context. But from a professional standpoint, if I don't have those things, I'm not going to get any work done. And so AI has just become another tool for me to use, and I just see it from that perspective. From the money perspective, I don't know, but humanity spends a bunch of money on a bunch of things that we don't recoup an investment from. The money never actually runs out. We don't have a gold standard anymore; it's an arbitrary concept. There's always money, as long as the printer companies keep making printers that print the money. I mean, isn't that kind of what we're doing at this point, though? Isn't that what the governments of the world have sort of decided? Well, there's a secondary problem here, actually, which is that the energy consumption is too high. Even setting aside the environmental impacts, the energy cost is so high that people are now starting to have their lives affected by spotty, non-continuous energy flow into their own appliances, their house lights, stoves, ovens, whatever. And that's happening near data centers, where increased energy usage is required to run LLMs. So I think that problem is likely to get worse even if the money doesn't run out. But if you had a smart refrigerator, it could adjust for that, exactly. If the things are smart, you know, then what do you even need the energy for? We're fine. I like the perspective. I mean, it is a tool, for sure. And the thing that I see is that it used to be the case that you could type into Google and get a website that helped you answer the question you have, and you can't even do that anymore, because that search engine has become utterly worthless, and so you need a replacement for it. And I think it's worse from an accuracy standpoint than Google at its best, but it's for sure better than Google now, and I think that's a worthwhile trade-off. You have to change if you're still using Google, or if you still believe that your one true programming language is the only one for the future; I think that's just a mindset which doesn't make sense. So, Zach, you wanted to come back and respond on the money issue? I can't speak to the energy stuff. I can speak to whether it's valuable. So, for developers paying twenty to forty bucks a month for AI in their core tools: if you just think of how much development time costs, you have to save, I don't know, twenty minutes or something for that to be a worthwhile thing. And that threshold was crossed a long time ago, in my opinion. Just from using these tools as a user, the amount of time that they save me makes it a no-brainer trade-off. I don't know if anyone on the back end of this is making money yet. I do know that at Warp we have a positive margin when people pay us for AI, so it could be that the model companies or the hyperscalers are just taking a huge loss on Warp's profit. But, you know, from pure economics, people find the value, they pay for it, they stick with it, and, surprisingly, we don't have very high churn on it. So I have to believe, just from that and from actually using it, that there's a ton of value. It's certainly true that these things are not infallible.
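As a rough check on that break-even claim, assuming a fully loaded developer cost of about $120 an hour (an illustrative figure, not one from the episode):

    # $40/month of AI tooling divided by $120/hour, expressed in minutes:
    echo "40 / 120 * 60" | bc -l    # => ~20 minutes of saved time per month to break even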
And, like, I guess you could debate from a philosophical perspective whether or not they're intelligent. I actually think they have some level of intelligence. Now, it doesn't quite work the same way that human intelligence works, but they're able to do things that, up until a couple of years ago, you would only say a human could do. I personally am super excited by the progress. Like, I studied a bunch of philosophy; I have a philosophy degree in addition to a CS background. I think it's absolutely fascinating what it says about what intelligence means. It's not, like you said, perfect human intelligence, but it's something, and I think it's a pretty awesome technological advance. So I'm more pro AI, more bullish. I think Warren's a little bit more on the skeptic side. That's all. I think I can't assign the word intelligence to it yet, because of the architecture that it's utilizing; it's just a probabilistic word predictor. And I think we need a different architecture, other than the transformer architecture, to actually reach anything that would be fair to call AI in any capacity. I do want to jump into how you're utilizing it, though, at Warp. Are you running your own foundational models, or are you passing queries to something configurable, like, can I put in an OpenAI API key or an Anthropic API key? What's going on there? You can pick your model. So we support the Anthropic models, the OpenAI models, Google's models; we support a US-hosted version of the DeepSeek models, even some of the open source models. You can't go directly to them, because our server has a whole bunch of logic on the prompt engineering, and sort of different agents for different types of tasks, so there's a logic layer in between. But the basic intelligence underlying the AI in Warp currently is the foundation models. There's a chance at some point that we'll get a little bit more into the make-a-model-to-predict-your-command type business, but currently we find that the best thing for our users is to use the foundation models; we're not going to spend a billion dollars on, you know, GPUs or whatever and train models right now. That would probably change the profitability statement you just made earlier. Yeah, well, I would say we are at the application layer. If you look at this as application layer, model layer, hyperscaler: Warp is at the application layer. No, it makes sense. But in that way, the model providers are definitely subsidizing the profitability, because they're taking huge losses. I mean, I don't know who's making money where; it's just a question of where the value is going in this whole thing. You know, the other thing with the model providers, the big question mark to me, is open source models. If you have open source models, especially ones that are of comparable quality, if the OpenAIs and Anthropics of the world can't maintain a real lead in quality or latency or something like that, how does the world work then? And so the open source alternative, where you run it yourself and you don't have to pay the margin to OpenAI, is super interesting to me. I think the one place someone's definitely going to make money is just on serving these models.
So I feel like, for better or worse, if you're Amazon with AWS, you know, GCloud, Azure, whatever, they're gonna make money, because someone needs to serve these models. And the local versions, which I think are another interesting thing to consider, are at least currently not really practical; you can't get the same level of power from downloading, you know, Llama. But that's another thing I'm watching: maybe it's just local models that totally disintermediate the need for these huge API-based cloud models. Who knows. No, I mean, you're onto something there, because it would cost you way more than the price you pay to the model providers to utilize their LLMs if you tried to run the open source models locally, on hardware that gets you comparable speed and accuracy. Yeah. We should talk about Warp some more, like its features and whatever. I like to speak my commands into the terminal so that I don't have to type them. So we added this feature; it's super cool. If you're using Warp, you can hold the function key, or you can configure it, and you can talk to your terminal. It's magic. You can just tell it what you want to do. It translates it into text, and then it runs it, so it's pretty Star Trek-y from a user experience standpoint. Yeah, that is something that we wanted: saving people from repetitive stress injuries. Why should people have to do anything? I know what Jillian's waiting for: she wants the brain interface device. Exactly what I want. I think it would be a really good Warp feature. I'm getting the sense that Warp is actually very well suited to Jillian's workforce. It really is, especially since you just said the speech thing, because I'm getting older and I can't type so much, so I very specifically need the speech thing. And why should you have to say it? I know, I shouldn't. That reminds me of, I'm like an episode kind of person here, it reminds me of an episode of The Simpsons, where Homer's in the hospital, and the guy in the bed next to him is on a breathing machine, and he's like, hey, how come that guy gets someone to breathe for him, and I'm over here doing it by myself? See, I thought you were going to bring up the episode where he tried to get to three hundred pounds so he could be classified with a disability and use a wand to dial. That must be an old episode, like a real old episode. Yeah, that was when it was still good. Yeah, Doctor Nick's food philosophy was, if you rub a newspaper on the food and the newspaper turns clear, it's good to eat. I mean, I'm pretty lazy, and I'm not ashamed to be lazy, especially when it comes to development. I don't want to have to do more work than I have to do to ship something that's useful. So what I care about as a developer... like, again, there are different kinds of developers, but to me, I'm all in it for: I want to build something cool, I want to ship it out to people, I want to be proud that I built it, and I want it to work really, really well. And I want to do that with the minimal possible effort, and to the extent that I have to put effort into it, I want it to be effort that goes towards thinking about how it ought to work.
And I don't want to spend effort on annoying shit in the terminal. That's the last place I want my limited brain cycles to go. I don't want to spend effort either on, like, changing function signatures in my files. I know what I want it to be, and I want to get from A to B as quick as possible. And so, yeah, to the extent that something like AI, and I think Warp for the terminal especially, makes it so I can be a little bit lazier... again, this isn't the advertising I put on our home page or whatever, but maybe it should be. I think it's valuable. It's great advertising. And, honestly, a lot of the best developers I've worked with in my career are just kind of all about that: don't make me spend my brain cycles on tedious shit and toil. So I feel pretty good about trying to eliminate that stuff for developers, so that you can do the more fun stuff. Because the really fun stuff, to me at least, is: how should the product work? And then, how do I architect this thing so that I can make the product work the way that I want? And the least fun thing is the typing in of the words in the text editor or the terminal to do that. I don't know if everyone sees it the same way. No, I think you're onto something. I was going to say, it's much more exciting to work on how the application works than on how to center this fucking div. Right, vertically. Vertically on the page, that's the key. Vertically; you know, horizontally you just use flexbox, no problem. Well, you know, there's an interesting thing here, because I feel like if we take this to the natural conclusion, it's probably the managing directors who will then be responsible for building the product, by communicating with the AI technology that we have available, and not needing a so-called technology department in any of our companies anymore. That's a horrible outcome to me. I think it's product managers making software. I mean, arguably that's what's happening. Yeah, arguably that's what's happening right now; there's just a couple of, you know, people in the way who are telling them that they can't have exactly what they want. That's interesting. Well, I think that's not how it works at Warp. That could be how it works at some places. But at Warp, for instance, and again, we may be different than other places, it's primarily engineers who are driving the product direction. We're working on a product where we're the customer, we're the audience, and so we have this awesome virtuous feedback loop: we build it, we use it, we like something, we don't like something, and so we drive a lot of it. I don't want to change that at all. Actually, I think that's not a good thing to change. And also, as bullish as I am on AI, I don't think that we are close to the point where you can build something meaningful without having some technical knowledge. If anything, and again, this is probably not the prevailing wisdom, I think you need to be more technical, to be able to sort of guide and correct and be the tech lead for an AI.
And if you are an aspiring developer these days, I would say, learn the fundamentals, learn the CS, better. Because if you want to effectively produce software in a world where you have something that's pretty smart, but also kind of a savant and kind of dumb in a bunch of ways, you need to know what the heck is going on for when you hit a wall. And so I don't think we're close to a world where it's MBAs building all of our software. No offense to MBAs; MBAs are great. But I feel like you're gonna need people who are experts in order to effectively use this tool to its full capacity. And I do think, Warren, to your point, if you're really junior and you don't learn, if all you've learned is how to build web apps, I do feel like you're a little bit at risk. My advice to those people would be: up-level your CS skills. But I don't see a world anytime soon where, if you're in a professional software development setting, developers are going away. I sincerely hope not. I mean, I'm screwed if so. I think it's the leap there that's problematic. We know you need the skills in order to utilize LLMs effectively. You're not going to be able to just offload your entire brain to this vehicle and have it go at full speed without thinking. It really does require critical thinking to interact with it effectively, and that's what you're saying. And I think part of the problem is that some companies believe that's not necessarily the case, that you can delegate this out to an LLM and have it work. Some companies are just buying the hype that we don't need to hire developers anymore. And there are companies out there that are, like, you know, we are an agentic building thing; there's the AI software developer, Devin, or whatever. Sure, yeah. And I think what I'm saying is, I know those can't work. But those companies will find out when they try to replace their development with Devin. Yeah. Is Devin building Devin? Because I don't think he is, or they are. Yeah. But I think the bigger problem is that the leap from 'hey, I'm someone who doesn't have technical capabilities' to 'I want a job utilizing technical capabilities,' that gap is growing, and it's harder to get into the industry now, because the technology available for us to interact with is much more complicated than it was five years ago, ten years ago, twenty years ago. And the skills you get from even training a little bit, teaching yourself, upskilling even a little bit, are much further away from what companies are looking for. At least that's my perspective, that's what I think I'm seeing. And I think the LLMs are contributing to that gap. Sure. Okay, so say you're a company and you're spending hundreds of millions of dollars on software developers. I'm sure you're like, God, I would like to spend less money and have equal output. And you could be like, okay, I'm going to hire AI software engineers, the Devin example. And I've tried Devin, and it's a neat vision. I'm not gonna shit on Devin; it didn't work that well for us. I know they're improving it, but that model today does not work. Will that model work in five, ten years? I don't know. I'm still skeptical. I think any company that wants to improve their
cost efficiency on the software side by replacing their developers is going to find that they don't get the ROI on that, and that the better ROI right now is to empower your developers and give them tools that let them be more productive. Saying this, I'm obviously super biased: I run a developer tools company where I'm building something whose mission is to empower developers. But I truly believe that's the right way to approach this. And, you know, companies will try whatever they're going to try, but they're going to stick with what actually gives them results. The economic incentives are such that if JPMorgan replaced all their developers with AI software engineers and then all their banking transactions failed, they'd be like, this is not the right move. So I do think there's back pressure towards doing something that actually works. I think that's a great model, and I encourage them to do that, and then, when it blows up, I want them to head over to my website, where I have my consulting rates listed. Exactly. They're going to need some smart people. You're going to need smart people still, yeah. For sure. I mean, we actually did a deep dive in this area in our episode on the DevOps report from DORA in twenty twenty-four. Okay. I don't know if you've read it, but the actual result was that the value LLMs were providing to organizations was suspect. It wasn't significantly different from where they had been before; it was very difficult for organizations to justify the value to the bottom line, or the value to the products being delivered. I think the interesting thing, the one thing it did say, is that people were happier using the LLMs, but it didn't actually reduce toil, and it didn't reduce the amount of time spent doing things that they didn't like, which is interesting. I think it gives the most value to people who are positive and optimistic about AI. So if you like AI, you should use this. I can tell you our experience from Warp. The way we think about users coming into Warp: there are some users who come into Warp because they're like, I love AI. They're like, I love this new technology, I want to use it in all my tools. And those are great users for us. They come in and they're like, holy shit, I can use a terminal in this totally new way. That is not the majority of users. The majority user for us is what I would call an AI-neutral developer, who might be like, okay, I'm open to this, but there's a lot of hype, and I have a bunch of inherent skepticism. And for those users, the challenge for us is to get them to actually see the value of the AI and actually use it. And the way we've figured out how to do that is very similar to that tool you mentioned earlier, thefuck. When you have an error in Warp, and it's like, oh shit, I'm missing this, I don't know if I'm allowed to swear on this podcast, I'm missing this, you know, Python dependency, we show something where it's like, hey, we can fix this for you, and all you have to do is hit command-enter and we fix it for you. And that's a conversion moment. So I guess my point here, kind of piggybacking off
your point, is: there are some people who are just into this, and they're gonna love it, and maybe they love it even if it isn't really helping them, and they're just messing around with LLMs all day. But I do think, based on our experience converting people who don't inherently want to use this technology, that there must be value, because, like I said, we have a lot of people paying us, and I don't think people are just going to pay us for something that they don't find valuable. Sure. And a lot of them were not AI enthusiasts to start. There are people who tell us, oh shit, this thing just saved me hours, and I love that. So that's my kind of counter to what you're saying. Yeah. I'm really curious: you said the commands are going through this proxy layer that you're hosting before interacting with the model provider. I don't know if you can share, but maybe there are some interesting metrics or data that you've been able to collect based off of what people are looking for, what's being searched, what sorts of problems are being fixed, anything in this area? Yeah, so we have a group of alpha testers who give us data collection access, essentially. Really common use cases where we're helping people are the 'install dependencies' one; the 'my git is messed up' one, like, I did something, I'm in some weird git state and I need to get out of it. We are increasingly fixing compiler errors for people, at least for simple compiler errors where the error log is in the terminal. We also get a lot of Kubernetes, Docker, Helm, those types of issues, where there's very heavy command line usage and, you know, pretty complex commands you need to run; that's another really popular area. And we do things where we write scripts for people, to automate things that they're doing over and over again. So it's a mix. I would say the really prime use cases for us, to start, are things that are pretty terminal oriented, and then increasingly, as people realize you can fix coding stuff in Warp, and we guide them into that, the coding stuff matters a bunch too, because developers spend a lot of time writing code. I think one of the things that doesn't really get highlighted enough is that there actually is a pretty steep learning curve to using these AI tools. There's an expectation that, oh, it's AI, I just go in and it's going to make my life magical, but really my experience with it has been learning how bad I actually suck at communication. And that was the first job, like, that was my first job: figuring out how to communicate. It's weird; it's turning every programmer into someone who needs to know how to write, which is kind of a crazy skill. But yeah, the quality of what you get out of these LLMs is highly dependent upon how good you are at prompting them, how good you are at providing them with the right context to answer your question. Yeah, who would have thought that being really good at writing English would have been the core thing? But I guess engineers write design docs; it's not that different from that skill. It's a real behavior change, and it's a real skill, and I think that's a great observation. Agreed.
I mean, I went to university specifically to study engineering so that I wouldn't have to read and write words, and now my life is pretty much just writing: a lot of blog posts, knowledge base articles, you know, chatting with LLMs. It's every single day. It's just words. That's my whole life now. Yeah. I think it's worth elaborating on, though. That's one of the reasons I'm pushing people more into AI. It's like, yeah, I know, you get it, you tried it, it made a mistake, and you're ready to write it off. But I really need you to stick with this and learn how to use it, because by putting that time and effort in now, you're going to figure these skills out and learn how to make it productive. And then, as the technology itself improves, you're going to start reaping exponential benefits from that, and so you and your career are going to be way, way ahead of everyone who's sitting there five years from now saying AI sucks. I'm one hundred percent with you; that's the smart approach. I think the tool analogy is the right analogy right now. You can't get mad at the tool if you didn't learn how to use it; that's counterproductive. And if you remove the hype for a second and just think of it as a computer program that you're using, then yeah, you've got to learn how to use it. What is it, RTFM? I kind of hate that, but learn how to use it if you want to get the most out of it; that's one hundred percent right. And if you think of it instead as a dumb coworker you don't want to associate with, but that dumb coworker is someone who's on your... well, I don't know where I'm going with this. Think of it as a tool that you've got to get the most out of. I think you're onto something really important there, because one of the things with a lot of the LLMs we see out there, and I think this is where some of the value is definitely lost, is that they don't do a great job of teaching you how to be an effective prompt engineer, how to actually communicate with the tool, to Will's point. And I think part of it is because those same companies have no idea how their own thing works, so they can't actually give good recommendations. But I think they do figure it out over time, because there are communities that pop up that discuss this, and then they bring that knowledge back in. We see examples where, like, with the DALL-E model that OpenAI has, the prompt is being mutated by their own model based on what the user inputs, because the raw input is often nonsensical and needs to be rewritten. And it would be great for those instructions to be exposed. I just feel like these tools don't do that good of a job. But you work at the application layer, and so I feel like you're providing a much better experience for teaching people how to utilize the tool effectively, because you have to, because you're actually selling a real product. Right, right. No, and it's a thing that we're constantly thinking through. We have a feature that is suggested prompts, essentially, where, the most common use case again being error resolution, based on the error that we see, we will suggest a prompt.
And the prompt is probably a little bit more than just 'fix this,' which is what a person might write. It's probably, 'fix this Rust error that is caused by incorrect mutability.' We do everything we can to make it the minimum amount of work, and also to show the user: hey, here's what we're actually telling the model, so that if you want to do this on your own next time, without Warp doing it, you can. That's a key skill; you're totally right about it. That's something that matters. I think this kind of shows my bias, because forcing developers to have to communicate properly? I just don't see that as a problem. I'm like, this is a good thing. This should be a feature, not a bug. Well, okay, maybe I'll put this into perspective, Jillian, in a different way. Communicating correctly is subjective, based off of the people involved in that collaboration. When you're communicating with a second person, you know there's a culture involved, your values are involved, the definitions of words that you grew up with, all these things. And when you're using an LLM, it's challenging to figure out what its culture is, how it responds to certain things, and so you have to learn that tool. So I think there's a difference: you're not becoming a better communicator, you're becoming a better communicator with that thing. And is it a good thing to force people to do? I mean, communicating with other human beings that you work with, yes, for sure. Forcing them to learn how to use N tools out there that are all slightly different, that have individual mindsets or cultures or whatever corpus of material behind them, that, I think, is open for challenge and debate. I just see this as a people-living-in-society kind of issue. When I was a kid, my dad was like, you're going to take a typing class, because that wasn't just an automatic thing back then, you guys. All right, this was a while ago. And I kind of just see AI as sort of like that. I think it's very pivotal, it's paradigm shifting, but it's another iteration of that. It's another tool that we're adding on, that people are going to learn how to use, that everybody's going to have to use. Just like, I don't know, with my kids now, I did not have the option to sign them up for a typing class or not; it's just part of their curriculum, they're just doing it. I think you should put them in a typing class anyway, old school, just to screw with them. See, I have a good parable here, because when I was in the fifth grade, I think, I was in a typing class that my school, a public school, provided, and I learned A S D F J K L semicolon over and over again for a year. And realistically, I don't use QWERTY. I actually find it to be a lackluster, subpar keyboard layout. So I was taught something that took me many months to unlearn so I could be more effective in my keyboard use. I'm actually a Programmer Dvorak fan. But I have used Linux to configure almost all of the keys so that the third level, not Shift and Control, but the special AltGr key, gives me other things that are beneficial for programming, and German and Greek and Roman characters, however I want to utilize them. Sounds like a lot of work.
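For the curious, the Linux mechanism Warren is describing is XKB's "third level" chooser. A minimal sketch, with the specific key mapping shown purely as an illustration:

    # Make Right Alt act as the third-level (AltGr) chooser:
    setxkbmap -option lv3:ralt_switch

    # A custom layout can then define third- and fourth-level symbols per key
    # in an xkb symbols file, e.g. putting u-umlaut on AltGr+u:
    #   key <AD07> { [ u, U, udiaeresis, Udiaeresis ] };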
Well, this is the thing: we're talking about productivity and optimizing your flow. And I find that I type, you know, a u with an umlaut or an a with an umlaut, or a dollar sign or the euro sign, frequently, and so I want an easy way to type those. I don't want to google "euro sign" and then copy and paste it from somewhere. You know, it's like on your phone, isn't there an emoji key where you can hit emoji and then find the emoji you want? I mean, I see the LLMs as a sort of similar tool from that perspective, right? You're hotkeying over to your Warp terminal to, you know, type those things out and get the answer rather than having to search on the internet. Yeah, if it's what you're doing, it's what you're doing, and there should be a productive way to, you know, accomplish the goal. Look, my keyboard layout is open source, it's available. And do you have blank keycaps, Warren? No. So this is not the episode where we talk about my keyboard. I think it's becoming that episode. I took a QWERTY keyboard, it's a Logitech, I don't even remember what number it is, like a K400 or something, it says on here somewhere, I have no idea what it is. It's their silent version, the one that makes the least amount of sound possible, because I care about noise more than anything else. And then I just moved the keys everywhere I could. And this is a thing you'll find out about keyboards that aren't designed for this: the F key and the J key have a different form factor than all the other keys on the keyboard, so you can't swap them around. I don't know why they do this, just to piss you off, apparently. You know, it's like, these two keys are going to be different. I don't know why, but they are. And so all the keys on my keyboard are in different spots except for the F and the J; they're exactly where they started on the QWERTY. I think it's because that's like home base, right? Like, you want a tactile way of finding out where those are. But it's the keycap form factor, not the key itself. So, I don't know, the only justification that I can figure out is that if you took all the keys off the keyboard and you're like, oh, where do I put them back? I don't know. Oh, these two have a different form, maybe the F and the J go there, and then I can figure out where the other ones go. And I'm like, that's pretty suspect. But every keyboard I've seen has this problem. I got a mechanical keyboard once, and my wife made me stop using it. She's just like, that is the most absolutely obnoxious, annoying-sounding thing; put that away, I don't want to see that again. I was like, no, that's cool, I love the feel of it. And she's like, it's really loud. Yeah, I've removed those from my kids' Christmas list. Nope, not there anymore, I'm not doing this. See, I know that would not work for me, because I'm a very angry typer sometimes. My wife can figure out what application I'm using and what I'm doing based off of how angrily I'm typing on the keyboard. When I'm typing a blog post or writing a message in Slack somewhere or an email, it sounds different to her, and so she can tell, like, how angry I am, you know. When I'm in an email, it's the exact same thing.
My wife can be like, don't send that, take a breath, don't send it. And I'm like... and she's like, no, take a breath first. Don't. And the thing is, actually, as a manager I try to remind myself of that: no angry Slacks, no angry emails. Oh no, he's typing the manifesto. Get in the car, kids, get in the car. So maybe, you know, doesn't Google have like a drunk email detection? Maybe what we need is for the keyboard to have an angry mode: nope, we're gonna wait fifteen minutes, and then we're gonna revisit this and see if you would still like to send it. Look, Jillian, I feel like you haven't tried searching hard enough. I'm sure there's some extension out there for your browser which runs some sort of LLM in the background and determines whether your email has some sort of angry tone to it, and will prevent you from sending an email if it contains one. No, there is. If you use, like, ProWritingAid, it will detect the tone of your email and maybe course-correct you a little bit. And I do have that. Yeah, you hit send and it comes back and says: I didn't send this, but I feel like it's a good time to talk about your feelings. What's the source of this anger for you? Let's get to the bottom of these issues. Speaking of which, I think we need to get back to Warp, because I have specific questions and, more like, feature requests. Bring it on. The point of having the app people on the show is that I can be like, if I use this, I have things that I want. All right, tell me, what can I do for you? So I saw that there are Warp workflows, and I'm wondering, can I do those in reverse? Can I go through and figure something out and then be like, all right, Warp, I'm stupid and I don't remember anything that I just did, but I'm probably gonna have to do this again, so I would like for you to go through my history, figure out what I did, and just go put it in a markdown file or some notes or something, as opposed to me having to do it proactively. It's a great idea. We don't quite have that. We have the ability to take a command that you've already run and turn it into a workflow. Just so folks know what a workflow is: a workflow is kind of like an alias, but it's a templated command. And so if you have a complicated thing you're doing in Docker, or, like, what's your workflow for cherry-picking something into a release, you can make it one of these templated commands, and then we actually make it shareable, which I think is kind of the killer value of it. And so if you're working on a development team, you can build up a library of these things that you can use in different situations. So if you're on an SRE team, it's like, okay, what are all the commands that I need to be able to run in the middle of a firefight? You can have that, and they're all in a common library that you have directly within Warp. We don't have the feature yet of, like, intelligently make these for me from a session, but that's a super smart feature. We do have a thing that we haven't launched but are experimenting with, which is essentially: run the output of your command through an LLM and have it summarize it for you and pick out the interesting and important parts. But I like your idea, Jillian, of figure out what I did, record it for me so I can do it again. Smart.
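Outside of Warp, you can approximate Jillian's request today with a short script: feed recent shell history to an LLM and ask for a runbook plus a templated command. Here's a rough sketch, using the OpenAI Python SDK as one possible backend; the model name, prompt wording, and history path are assumptions, not anything Warp ships:

```python
import subprocess
from pathlib import Path

from openai import OpenAI  # pip install openai

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def recent_history(n: int = 50) -> str:
    """Grab the last n lines of bash history (adjust the path for your shell)."""
    result = subprocess.run(
        ["tail", f"-{n}", str(Path.home() / ".bash_history")],
        capture_output=True, text=True,
    )
    return result.stdout

def history_to_runbook(history: str) -> str:
    """Ask the model to reconstruct what was done and write it up as a
    markdown runbook, ending with a single templated command."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumption; any capable chat model works
        messages=[
            {"role": "system",
             "content": "You turn shell history into concise runbooks."},
            {"role": "user",
             "content": (
                 "Here is my recent shell history. Figure out what I was "
                 "doing, write a markdown runbook so I can repeat it, and "
                 "finish with one templated command using {{placeholders}} "
                 "for the parts that vary:\n" + history
             )},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(history_to_runbook(recent_history()))
```

The {{placeholder}} output has the same shape as the templated workflows described above, so the remaining manual step is just saving it somewhere your team can share it.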
So I found that some people swear by this: during a chat session, at the end, just tell it to echo back at you what you did, like, say, what did I do? And then when it's done, say, okay, now I want you to take that and write a document for me that includes that information, so that the next time I have this problem, I can go and reference it. And with Warp you can say, okay, now turn that into a templated command. You could totally do that today in Warp. The one piece that's missing is we don't tie the loop of turning it into this specific executable thing that is a workflow. But, you know, we also have a notebook concept in Warp. So you could be like, hey, LLM in Warp, summarize everything I did, turn it into a notebook, extract the relevant commands for me. But it's not quite as seamless, I think, as it could be for Jillian. It's a good idea. Yeah, I'd really like to be able to have different, I don't know if it's sessions or contexts, but I suppose one of those where I can say, I don't know. I mean, for me it would be client-dependent or context-dependent, or even, like, tell me which environment I'm in, which version of Terraform I'm using, you know, all that kind of thing, for sure, like, it's right here. Yeah, so that's what I want. I think that's a super interesting idea. I mean, you can ask Warp about anything in your environment today, so you could be like, what toolchain am I using? What are my environment variables? Anything. You can ask about your history, and so you can get some of that today, but we don't have, like, packaged contexts where when you start a new session you get all that stuff, which would be cool. Well, if we're taking feature requests, I would like that. I'm gonna force everyone on our team to listen to this. Well, you should probably wait until the episode drops and then use, you know, an LLM to summarize the episode and extract the feature requests from it, and we can do it that way. Or, I think there's been so much interesting discussion about, like, philosophy and AI in here that I'd make them all listen to it. I don't think a distilled, summarized version is going to do it justice. Oh, I totally agree, we need the human version, Warren. Yeah, I'm gonna put them in a dark room and play it back at half speed. I don't listen to content any slower than, like, 2x these days. Before we get onto another tangent, I have this feeling that we should move off onto picks. It's probably a good part, good point, good time, good words. Look at me, working my words. Well then, why don't you go first? Right on. Okay, so I have a couple of picks today. One, I'm blaming you, Warren, and Matt from last week, because I got the book Dungeon Crawler Carl, and I hate how much I like this book. It's just dumb and it's funny and it's entertaining and it's engaging, and it sucked way too much of my time last week. So, Dungeon Crawler Carl. I can't even remember who the author is. Do you remember, Warren? No, I didn't look it up. Yeah, just google Dungeon Crawler Carl. It's a stupidly fun book, very entertaining. If you're listening to this episode, the link will be included with the podcast, just, you know, down below, so you don't even have to google it. Just click the link, right?
And then the other pick I'm gonna recommend is, and Zach, you mentioned this earlier: if you haven't gone to your favorite AI tool and just started a chat about philosophy with it, I highly recommend that. And that's going to be my pick for the week, because it's just so much fun to do. And Warren, I know you said that AI is not intelligent, but neither are some of the people I hang out with, so chatting with AI about philosophy seems to be working out quite well, because it has a really cool perspective on some of this stuff and some insights to offer. And I've used it for setting goals as well, and challenging me on those goals, and it's been pretty insightful for that. So I think that's one good way to start working with AI. And those are my picks. So, Jillian, what about you? What'd you bring this week? I'm going to keep going with the self-promotion until I'm back up to the lifestyle to which I've become accustomed. And if you go to my website, yeah, that's right, dabbleofdevops dot com slash AI, I have a data discovery tool, mostly for data science companies. If you're not a data science company, like, I don't even really know how to talk to you, so maybe just ignore this portion. But the idea is that you get your data, you load it into the LLM, and then you can start asking it questions. It kind of acts like, maybe, a junior grad student. You don't want to completely trust what it says, but it gives you a very good first draft. I'm adding the PubMed interface so you can go search medical literature and say, okay, get me all the papers back on this disease or this protein or this drug interaction, you know, whatever the things are. Load that into the LLM, start asking it questions. I've got a couple of different data sets: Open Targets, a couple of single-cell data sets. I want to add a couple of transcriptomics data sets, even though those might be out of vogue, because they're still cool, you guys. Okay? They're still cool. So anyways, cool things are being added to the platform, for anybody who wants it, mostly in the biotech space. Again, if you're not biotech, I don't really, I don't even know why you're listening to me. Just tune me out, it's fine. Don't reduce your, you know, your TAM, your total addressable market here. You know, if you don't understand what Jillian's saying, maybe you should go to the website anyway and see if you can figure out a use case for yourself. That's true, you could. I do have some companies that use it just for meeting notes. They, there you go, use Otter to record all of their meetings, and then Otter gives them, you know, the different summaries and images and things like that. It's pretty cool. And then you can feed that into the LLM and have sort of a history of meetings, so then you don't have this: didn't we have a meeting about this? Didn't somebody make a database? Wasn't there a thing? Wasn't there a person we can talk to here? You can just go and query it, and then it will tell you. Sometimes it gives you the answer you want, and sometimes it's like, no, that conversation never happened, you're hallucinating now. But, you know, it's one or the other. Well, there's a big overlap between biohackers and software engineers as well, so they may find that interesting.
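For the PubMed flow Jillian describes, the retrieval half is straightforward to sketch with Biopython's Entrez module. What her platform does downstream isn't public, so the example query and the hand-off to the LLM here are placeholders only:

```python
from Bio import Entrez  # pip install biopython

Entrez.email = "you@example.com"  # NCBI requires a contact address

def fetch_abstracts(query: str, max_results: int = 10) -> str:
    """Search PubMed and return plain-text abstracts for the top hits."""
    search = Entrez.read(
        Entrez.esearch(db="pubmed", term=query, retmax=max_results)
    )
    ids = search["IdList"]
    if not ids:
        return ""
    handle = Entrez.efetch(
        db="pubmed", id=",".join(ids),
        rettype="abstract", retmode="text",
    )
    return handle.read()

# Example query (hypothetical): pull abstracts, load them into whatever LLM
# you use as context, and start asking questions, treating the answers like
# a junior grad student's first draft: useful, but verify the citations.
abstracts = fetch_abstracts("BRCA1 drug interaction")
print(abstracts[:500])
```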
Yeah, they could put all the literature and data in there around biohacking, which I'm not totally familiar with, although I am very much looking forward to having, like, bionic limbs. That would be great. That would just be amazing for me. Because you want it so that you don't have to think about moving your limbs anymore? You want something else to do it for you, right? No, I just want limbs that work at this point. That would be nice, just on a mechanical level. That's what I need. And then, you know, on that note, we've been kind of talking about the philosophy of AI and so on, and we can kind of argue about the tools, but from an accessibility viewpoint, AI is really great and doing some really great things. You know, I have some issues with typing as I age out of this career field, and I have some low-vision people in the family that AI is very helpful for, being able to dictate and so on. There are a lot of cool accessibility things that can be done with AI, and I do always like to give a little bit of a shout-out to that, because I do think all of that is pretty great. You know, I have somebody who's low vision who can now listen to audiobooks and, you know, basically still go through the internet just with voice, and I think that's pretty cool. So, I don't know, that's it, those are my picks. Alrighty then. So for my pick this week, I primed it at the beginning: it's this Microsoft-backed research paper that came out of Carnegie Mellon, "The Impact of Generative AI on Critical Thinking," and I think it's just an absolutely fantastic paper about the correlations between utilizing AI tools and how much you develop your critical thinking processes and exercise that sort of brain muscle. I think some people have misinterpreted the paper as "Microsoft paper says AI is making us stupid," but the one thing that really does come out of it is that if you have low confidence in an LLM doing the right thing, you will get much better answers out than if you have high confidence in the current tools that we have. Because the current tools are transformer networks that hallucinate, and if you just assume they give you the right answer, like your calculator does, you are going to stop developing the muscle of challenging where you got the information from and trying to understand it. I will say that this leads me to a great interview question. I know that interviewing candidates today can be challenging because they may be using LLMs to answer your questions, and for me, I think you can just naturally ask them: hey, how much confidence do you have in the LLMs that you use to produce the right answers? The more confident they are, the more likely it is that they're not using critical thinking to challenge what comes out of them, and it could be a useful litmus test for what sort of person you're hiring into your organization. Right, huh. And so by phrasing the question that way, just presuming that they are using AI, you make them more comfortable admitting what they might be trying to hide?
Well, I think realistically, part of our interviews now should be dedicated to solving problems that don't rely on using LLMs, or problems that can be solved better using LLMs, and then asking candidates to use LLMs, and which LLMs they're utilizing to solve the problem, and how they're going about it. Because you're lying to yourself if you believe that you don't want to pull these tools into your company to utilize in some fashion, or that people aren't utilizing them. Regardless, if you give them a take-home assignment for your company that takes four hours or eight hours, some of them are going to utilize tools. And I don't think whether they utilize the tools says a lot about the type of person, but how they're utilizing them, and what their expectations are of those tools, does say something about them. Cool. All right, Zach, what'd you bring for a pick? I have a tool that I like, why not? So it's a tool called Granola, and it's a note taker. A meeting note taker, you say? But the thing that I like about it, compared to all the other ones that I've tried using, is that you don't end up with a little black box in your Zoom for the note taker. The note taker works just off of your computer audio. Oh, so there's no, like, this is weird, who is this, like, Zach's note taker thing joining the meeting? And the way it takes notes, the default way, isn't by transcription. It's by semantically summarizing and giving you the key points of what happened in the meeting. I don't like taking meeting notes, so this is a cool thing. It's called Granola. That's one thing. A second thing: I'm reading a book. It's pretty nerdy, I don't know why I'm reading it. It's like a travel guide to the Middle Ages. It's a history book, and it's all about, from, like, the year eleven hundred to fifteen hundred, how did people travel? Like, what was it like for them to take a vacation? They weren't really taking vacations; they were primarily going on pilgrimages, or at least that's what survives in the written record. And it takes you all over Europe, the Middle East, and, like, the Near East. I'm not through it yet, so I don't totally know where it ends up. But what I like about it from a history perspective is that it's about a relatable experience, not about a series of historical events. It's not about historical leaders. It's about, say, you, having to be living in the year thirteen hundred: what the heck were you doing? How did you pack? How did you travel? Where did you stay? What were the inns like? What were you trying to go sightsee at? I don't know why I like it so much, but I really do. It puts me in a very different mindset from how we're living today. So that's super cool. It's like National Lampoon's Middle Ages Vacation. Yeah, except I guess it didn't seem like it was very funny to be traveling then. It was a lot of very serious, you've got to get to this religious site, you've got to see these relics. People were really wanting to see a bunch of, you know, historic relics, or at least that's what the written record that survives says, and that's where the history comes from. So that's pretty cool.
I used to really like all those diary-type books. They're fiction, but they're written as diaries of, like, the kids that would do the Oregon Trail and travel across the United States, and they're from other places as well too, so you have people coming to Plymouth Rock and doing the Oregon Trail and, yeah, in general, people going different places across history. It used to be a lot harder. You used to have to worry about more things than whether the gas station has your preferred chicken tenders or whatever, you know. Yeah, there are so many questions, Jillian. Awesome. Zach, thank you for joining us. This has been a super entertaining episode. Yeah, this has been fun. Yeah, that was great, a super interesting conversation. Really appreciate you all having me on here. For sure. I'm gonna challenge Jillian to go download Warp, try it out, and then invite you back on the show for a head-to-head rematch. That's the one thing I really want. So there we go. It's at warp dot dev, that's where you get it, and it's now available on Mac, Linux, and Windows, so all platforms. Right on. Cool, cool. Well, thanks everyone. Zach, thank you again. Warren, Jillian, thank you, and we'll see everyone next week.