speaker-0 (00:07.82) Welcome back to Adventures in DevOps. For this episode, we're going to do a deep dive into serverless. So we've invited founder and long-time cloud solutions architect, Lena Furman. Welcome to the show. You've done it all: UX designer, web designer, full stack, cloud engineer, and now cloud architect. But as founder, that probably also means customer meetings, user research, and I'm sure training. You can't get away from that, huh? speaker-1 (00:32.312) Thanks, Florian. Thanks for having me. Yeah, I've kind of done all of the above. I love some of them, some not so much, but they're all part of my daily life. And I think in general, I see myself as a generalist. I can do many things well, but none of them really great. So yeah, that's really where I find my strength. I think I would also get bored otherwise if I would really only dive into one topic and then become the best expert in the world in that topic. speaker-0 (01:04.698) I mean, it really is so many things, but I feel like you are the founder at Baspinian, which is a cloud consultancy, is that right? And I assume that your customers have so many different requirements or challenges that you need to sort of work your way through. And I like the perspective that you've done all these things, but you don't like all of them. So which ones do you not like? speaker-1 (01:11.182) Yeah, correct. speaker-1 (01:26.242) So the technical topics are usually the ones that we are very passionate about, that we jump onto. But usually, you know, it starts way before that with the human challenges, with the human problems. And I like that you used the word challenges, because that's where it usually makes sense to start. It doesn't make sense to already think about the technical solution to the problem that we have had in our mind from the very beginning. And I have to hold myself back sometimes to not just do that from the get-go.
I like to just get into the project and already have in the back of my mind that serverless is going to be the solution for this company, for this team, for this project, for this problem. So yeah, I think that's what I hate the most, that I have to hold myself back and not jump into the technical problems or technical topics from the very beginning. speaker-0 (02:17.816) So you really focus on the serverless domain. I think anyone who knows you knows you're all about the serverless. And I think before we get into sort of the customer conversations, the question I think I need to ask you most is: in the age of AI, is serverless over? speaker-1 (02:31.438) I kind of expected a question like this. And obviously the answer is "it depends," because I'm a consultant. No, I think obviously serverless has a lot of potential for AI as well. I think there is a big confusion where people, when they hear the word serverless, directly think about function as a service. They think about AWS Lambda, Google Cloud Functions, Azure Functions, and so on. And I think serverless covers a lot more topics and a lot more services that are offered by modern-day infrastructure providers, cloud providers, whatever you want to call them. And there are serverless services that cover the AI area, where you can have basically serverless inference or serverless training, or even consume the AI itself just through an API, which is also kind of serverless, right? Serverless basically just means that there are no servers involved for you as the consumer. Obviously, there are still servers in the back end. There will always be servers. But you as the user, as the consumer, you don't have to concern yourself with those servers. And for me, that's the definition of serverless. It's not to jump directly to function as a service. speaker-0 (03:55.612) No, I absolutely agree.
I think it's one of those areas where there are a lot of people who stand up on principle and believe that serverless has to be a particular thing, and based on that, they either decide it's the best thing ever or it's literally the worst and they won't go anywhere near it. speaker-1 (04:11.616) Exactly. And like everything in IT, it's not a silver bullet. There are, you know, upsides and downsides to it. I quite like this image of the abstraction ladder, where if you climb up the abstraction ladder, then you have to do fewer things. There is less that you have to think about and concern yourself with, but there is also less that you can do. So you have fewer possibilities, you have less flexibility. And this can be applied to AI and to other topics, to compute or databases, file storage, blob storage, whatever. But usually you have to place yourself somewhere on this abstraction ladder, and the question is how much control you can, and are willing to, give away. speaker-0 (04:58.156) Yeah, I mean, you're arguably constraining yourself with a lack of fine-grained configuration, whereas the most configuration you would have is if you were on bare metal. I think maybe the question is, especially because you're in the consulting space, how do you decide that serverless is right for all of your customers when, I mean, we know it's the right answer, but how long does it take to convince them that that's actually what they wanted all along?
They have cases where maybe they are able, from a compliance point of view, from a technical point of view, to give away the control, to give some control to the platform provider or the serverless provider. But there are cases where they might not be able to do that, because the data is just so sensitive, or because it's important for them to have the sovereignty not to do it. It also always comes with higher lock-in if you climb up the abstraction ladder. So usually the question is more "is this use case suitable for serverless?" and not "is this company or this team suitable for serverless?" For me, this also goes a bit into this discussion of microservices or not. It's also not a binary choice there. Usually, it can make sense to start out with a big monolith and then break out little parts that make sense, that do have the requirements to be broken out, that maybe have their own lifecycle or stuff like that. These are all, I think, very similar discussions that you need to have with customers. speaker-0 (06:53.39) I totally agree. And you know, it's interesting you brought up what I think is on our secret bingo card here, microservices versus monoliths, that I think we want guests to sort of talk about. I'm still planning this episode where we have a heated debate over one architecture, I hate to even call them paradigms, versus another one. It's sort of hard to separate it with serverless, because there is this aspect of serverless where, if you don't control the servers and you limit the configuration, then likely you may also want to limit the scope of each individual function in a way. I think one of the heated debates, to bring it even closer to home in the serverless realm, is: do you try to still shove all the business logic for a particular service into a single function or container, or do you segregate it across some sort of business lines or functionality? Is there a limit?
I don't know if there's one overarching strategy that claims to be the best practice, but I've seen a lot of people on both sides of the debate. Where do you land on that? speaker-1 (07:58.146) So when I started out doing serverless, that was even before I founded Baspinian. It was at a startup that started completely from scratch with their infrastructure, and we wanted to do everything serverless. We were a very small team. We didn't have the capacity to take care of servers, and neither did we have the financial means to do so. And we fell into this trap. We started building this huge landscape of like 60 different functions that were all kind of their own little nanoservice, as I call them. And it was crazy. We couldn't really manage them well after a time. Luckily they were all written in Go, so they were all kind of similar, but they all had their own lifecycle. They all had their own dependencies that could come up with CVEs and stuff like that. When we upgraded the Go version, it was a huge endeavor. It was a huge task. So right off the bat, I started completely at one extreme of this debate. And I think that's almost never good, it's almost never the right answer. And we had to learn it the hard way. After a while, we tried to consolidate different functions from nanoservices into microservices. You know, we did all of the domain-driven design workshops, event storming, all the good stuff. And in the end, we ended up with, I think it was around six microservices, each of them their own function. And I think that was a much more sane place to be in the end. And I like to follow this approach that I mentioned before: you start with one big thing, and then when there are different requirements for parts of it, you start breaking them out one after the other. And when I say requirements, this can be, you know, a different team maintaining this part.
Does this part have different requirements towards resources, you know, compute, storage, memory, stuff like that? Does it have its own lifecycle? Stuff like that can then make it sensible for parts to be broken out. But to do it just from the get-go, I think most of the time it's really overkill and it will lead to much more complexity than you would have initially had. speaker-0 (10:15.086) I think this cannot be overstated, and you called it out. Like, look at the interface for the control plane, for the configuration management of your individual serverless functions. If we're talking about Cloud Functions in GCP or Lambda in AWS, you've got things like memory and timeout for the most part, maybe runtime and architecture, like ARM or x86-64, as the options. And that's it. There's not a lot else. Maybe it's the total amount of hard drive space or temporary storage space. If there's a difference in that, then you sort of have to have a distinction in what you're deploying, because you can't have two different values there for the same thing. You have to deploy separate pieces there. You know, trying not to get into whether or not microservices are appropriate, I think the one thing I do like about what you said here, and maybe the question in the back of my mind, is that when you are using a serverless technology, there are two things that come up. Number one, I feel like the cloud providers really try to push a lot into segregation, into building, I even hesitate to call them nanoservices. What's smaller than a nano? Like an angstrom? Angstrom services. All the documentation, lots of the articles out there on the internet, always suggest, like, yeah, just throw another function-as-a-service instance at that technology, and so they lend themselves to going in this direction. But I think my experience mirrors your own, which is: well, have the right level of separation, whatever that is.
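The tiny configuration surface described above can be made concrete. This is a minimal sketch assuming a Lambda-style platform: the function name, defaults, and validation bounds are illustrative (the bounds match AWS Lambda's documented limits, but check your provider). With boto3, a dict like this maps closely onto `create_function`; here it is only built and inspected, not sent anywhere.

```python
# Roughly the entire tuning surface of a Lambda-style function:
# memory, timeout, runtime, architecture, and ephemeral scratch space.
# The function name and default values are illustrative assumptions.

def function_config(name: str, memory_mb: int = 512, timeout_s: int = 30,
                    runtime: str = "python3.12", arch: str = "arm64",
                    ephemeral_mb: int = 512) -> dict:
    """Collect the few knobs a FaaS platform typically exposes."""
    if not 128 <= memory_mb <= 10240:   # Lambda's documented bounds
        raise ValueError("memory must be between 128 MB and 10,240 MB")
    if not 1 <= timeout_s <= 900:       # max 15 minutes
        raise ValueError("timeout must be between 1 and 900 seconds")
    return {
        "FunctionName": name,
        "MemorySize": memory_mb,
        "Timeout": timeout_s,
        "Runtime": runtime,
        "Architectures": [arch],
        "EphemeralStorage": {"Size": ephemeral_mb},
    }

config = function_config("orders-api")
print(sorted(config))  # six keys: essentially the whole control plane
```

The point is how short this dict is. Because memory, timeout, runtime, architecture, and scratch space are essentially the whole control plane, any workload that needs a different value for one of them has to become a separate deployment, which is exactly the pressure toward smaller and smaller functions discussed here.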
It's like, I don't know how many it is, but it's three things or four things, or whatever a thing is in this context. But the cloud providers don't come up with that argument to start. And then there's a lot of debate within the industry about how to approach these things, and you end up with people on both sides. I like the way you've broken it down, though. The one retort I think I have, and it's the same thing that Sam Newman brings up in Building Microservices, is that you start with the monolith and then, say, break it apart. However, in practice, most large organizations I see who have monoliths cannot break them apart. They have tried many times and failed. speaker-1 (12:33.954) Yes, I fully agree. I've seen that before. And I think this comes down to the fact that even if you start out with a monolith, you still have to do your architecture right. You still have to create an architecture that allows you to break out those parts. And usually what works quite well is, you know, this clean architecture, hexagonal architecture, where you have a clear separation between the core, the business logic of what you do, and you have clearly defined interfaces. And I think that's something that, for example, a language like Go really facilitates and helps you go in this direction. But this doesn't even have much to do with serverless, in my opinion. But if you don't do your due diligence when it comes to architecture and prepare your monolith for stuff being broken out, then obviously you're going to run into problems, and you're going to hesitate to break those things out because it's just so much work, or because your engineers are afraid of breaking things that they don't have anything to do with, rightfully so in most cases. So yeah, just having microservices, or nanoservices, or angstrom services, doesn't prevent you from having to do your architecture right. It actually makes it much more important that you do so.
And that is something I think about when it comes to these requirements. I also work a lot with containers, with Kubernetes and these kinds of things. And to me, when I look at serverless, usually the requirements that serverless applications have towards architecture are basically the same as for a traditional cloud-native service, but it's much more important that you actually follow them. When you run your application on Kubernetes, maybe you can be forgiven if you store something on the file system or something like that, because your application will not restart for months and months. But when you're running something on a function-as-a-service platform, it's going to restart every couple of minutes or hours. So you're going to run into the problems much more quickly, and it's going to be much more severe if you make those mistakes. speaker-0 (14:39.554) See, I have, like, I think the exact same scenario, and I look at it from the opposite perspective. You are saved from the grievous mistake you've made if you pick a pure serverless solution, because it reminds you immediately how dumb of a thing you did in trying to use the hardware directly in a way it wasn't prepared for. Like, it's ephemeral. Whereas if you try to build a whole queue system that's dependent on a database that's ephemeral and you're running in Kubernetes, you're much more likely to get six months out the door and then find out that you've caused a huge production incident from which there's no recovery. Well, no technical recovery. Of course, there's still a business recovery, which is usually an email saying: oops, we messed up, we promise to take a serious look at our architecture and do better in the future. Whether or not that's coupled with actually doing those things is a separate question. But I think this is sort of weird. There's a weird crux for me where, if you use serverless in the way that it's described, it forces you to think about those things.
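The file-system pitfall Lena describes is easy to simulate. A minimal sketch, with a `FakeStore` standing in for an external service such as Redis or DynamoDB (a hypothetical stand-in, not any real API): instance-local state silently resets whenever the platform recycles the instance, while externalized state survives.

```python
# Simulating why instance-local state breaks on a FaaS platform.

class FakeStore:
    """Stand-in for durable external storage shared across instances."""
    def __init__(self):
        self.data = {}

    def incr(self, key: str) -> int:
        self.data[key] = self.data.get(key, 0) + 1
        return self.data[key]

def make_handler(store: FakeStore):
    local_count = 0  # lives only as long as this one instance

    def handler(event):
        nonlocal local_count
        local_count += 1                  # lost on every cold start
        durable = store.incr("requests")  # survives instance recycling
        return {"local": local_count, "durable": durable}

    return handler

store = FakeStore()
handler = make_handler(store)
handler({}); handler({})       # two requests served by the first instance

# The platform recycles the instance: on FaaS this happens within minutes
# or hours; on a long-lived Kubernetes pod it might take months to bite.
handler = make_handler(store)  # fresh instance, fresh local state
print(handler({}))             # {'local': 1, 'durable': 3}
```

The local counter restarting from 1 while the durable counter reads 3 is exactly the bug that takes "six months out the door" to surface on long-lived servers but shows up within minutes on FaaS.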
It's not forcing you to think about things that you wouldn't have had to think about. It's forcing you to think about things that you would have had to think about but decided not to, because you didn't know, or you thought you knew better going down that approach. So from my standpoint, I see it as a safety mechanism rather than a pit of failure where you're forced to spend time and resources and effort learning about something that you may not have wanted to otherwise. speaker-1 (16:06.926) Absolutely. Yeah, I also always recommend people to run at least two replicas of their service on Kubernetes, not just because it's going to be more stable for them and more production-ready, but exactly because of the reason you just stated: it forces them to not store stuff in memory that would need to be shared between those replicas, and all these kinds of things. It just lets you run into those traps much earlier on. speaker-0 (16:34.982) So for these customers that haven't fully come to terms yet with the fact that they're going to be migrating to serverless, because they haven't thought of it, or they did think of it and they just hadn't met Baspinian yet: what is your best argument for the migration? Is there one that you end up going to more frequently than others, where you're just like, here it is, and then bam, they migrate over? Or do you feel like it's a challenge every time? Or is there something else going on? speaker-1 (17:01.998) So, all of the above. It's usually a challenge because there are completely new paradigms that people need to learn. It's completely new ways of how they deploy and write their applications. I think the one argument that mostly convinces people is cost. That's probably true for many different areas in IT and in business in general. But I don't mean cost directly. So let me explain.
When you think about cost, what you think about initially is, you know, what is on your AWS bill, or your Google Cloud bill, or your Azure bill. That is the most tangible cost: you have one bill, it has one number down to the cent, you know exactly where your costs come from, and so on. So that makes these costs very tangible, but usually those are the least of your costs. Usually most of your costs come from your engineers. They're usually much more expensive than your cloud provider bill, or even the physical hardware that you might have. I have this talk that I've held where I ask: is serverless a blatant cost trap? And obviously, at first in this talk, I also focus on the cost of the infrastructure and of the platform that you use, and how you can save a couple of dollars there if you do it right. Like, running a server constantly may cost you, I don't know, a hundred bucks a month or something like that, and with serverless, you might get that down to five or zero bucks. But usually the real money that you save is when you make your engineers more productive, and when you have them spend their time on things that actually matter to their customers. Because when my engineers patch the operating system of the server where my application runs, my customer isn't really going to notice that directly. They're not going to directly care about that. Whereas if my engineers or developers bust out new features or fix bugs in my application, that's what is going to generate real business value. That's what my customers are going to care about.
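The "hundred bucks down to five" comparison can be sanity-checked with back-of-the-envelope arithmetic. The per-request and per-GB-second prices below are illustrative assumptions in the ballpark of published pay-per-use FaaS pricing, not quotes from any provider:

```python
# Back-of-the-envelope: an always-on server vs. pay-per-use functions.
# All prices are illustrative assumptions, not current quotes.

ALWAYS_ON_SERVER_PER_MONTH = 100.00   # a modest VM running 24/7

# Ballpark FaaS pricing: a per-request fee plus per GB-second of compute.
PER_MILLION_REQUESTS = 0.20
PER_GB_SECOND = 0.0000166667

def faas_monthly_cost(requests: int, avg_ms: int, memory_gb: float) -> float:
    request_cost = requests / 1_000_000 * PER_MILLION_REQUESTS
    compute_cost = requests * (avg_ms / 1000) * memory_gb * PER_GB_SECOND
    return request_cost + compute_cost

# One million requests a month, 200 ms each, 512 MB of memory:
cost = faas_monthly_cost(1_000_000, avg_ms=200, memory_gb=0.5)
print(f"${cost:.2f}/month vs ${ALWAYS_ON_SERVER_PER_MONTH:.2f} always-on")
```

Even at a million requests a month this lands around two dollars, versus a hundred for the always-on box. And as the argument above goes, this arithmetic still ignores the bigger term entirely: engineer time.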
So the more I can shift the focus of my engineers from doing infrastructure work that speaker-1 (19:26.666) I could otherwise delegate to an infrastructure provider, and get them to really do feature development and stuff that is a bit higher level, that my customers care about, the more money I can save there. Because then I can use my engineers, my most expensive resource, much, much more efficiently. speaker-0 (19:46.326) When you get customers on board and you're trying to sell them on what you can offer them, is cost the number one thing? Or are they coming to you with performance concerns, or with unmanageability of what they currently have? Where is the biggest focus? speaker-1 (20:03.786) It really depends on whom you talk to. We usually have two ways of approaching our customers. Either we go through the technical folks that we meet at meetups, conferences, stuff like that, and they're usually quite passionate about the technology, about optimizing performance and all these things. And the other approach is obviously through the business side, through the decision makers, who are not so easily found at a cloud-native meetup or a Kubernetes conference or something like that. And there, I think for those people, the concerns are usually the ones that I mentioned before. They are talking about cost. They are talking about: how can we serve our customers more? How can we be more flexible in the market and turn out features more quickly, and stuff like that? But when you talk to the technical people, whom I can usually also identify and relate with a bit more, then I think it's a lot about passion also. These people are very passionate about what they've built, and they're very passionate about optimizing performance, these topics.
They're passionate about the technologies that they use, and you have to play your cards really well if you want to convince them that new technologies, other technologies, might be more efficient and might benefit them, not just their company, but also them as people in the long term. And that's something I think we usually forget: we're still talking to people who have their own pride, their own passion, and their own fears as well. They might even be afraid of new technologies, because it might make them obsolete, or at least they might be afraid of that. So I think, yeah, when we talk to the technical people, these are all things that we need to take into account. speaker-0 (21:57.912) I think I'm going to misquote the year here. I think AWS came out with Lambda in 2014 or something, or at least that was the moment when my team decided that it was the opportunity to actually investigate. And there was no shortage of conversations, philosophical mostly, about how to actually utilize this technology. Do we use one Lambda per endpoint? How does the cost work? Is it actually viable for us? I think literally every conversation. What language do we use? Because we weren't running JavaScript at the time, or Python; we were using C#, and I don't think there was a runtime for that. But when it did come out, it was like, it's the slowest next to Java. Which was always, it's like, well, it's not the slowest, so we could be doing worse. And I think there is a huge argument behind that: why not to use a particular technology, because it doesn't match up well with what you have going on as an organization, and it doesn't necessarily drive the conversation to change what technology you're beholden to. So really, is the question "can we use serverless?" That's not the right question. The question is: what do we have to do in order to start using serverless? And then you're really asking the question: who do you want to be as a software engineer?
And then people attach their identity to it. Like, I'm going to make a little bit of a jest towards last week's guest, who's a Haskeller. I feel bad. Try to make Haskell run on Lambda. You're not in for a good time right now. Maybe a couple of years from now you'll be in for a good time, but right now it's a struggle. Or try to use Java on Lambda: up until a couple of years ago, there was no snapshotting. Incredibly expensive to run. And maybe that's something we can get into a little bit. But I think there really is an aspect here where you have to convince the engineers. I guess, you know, my trauma is showing from my past. And so maybe I want to ask you about this: when you get in the weeds, are there any interesting stories that just immediately come to mind? I know you keep bringing up the cost and the level of abstraction. Were there stories of training, or stories of... I mean, you sort of looked into the distance, so maybe there is something that particularly came to mind. speaker-1 (24:13.966) You got me. Yeah, there is one from back when we were still talking about the transition from, you know, traditional VMs to containers. There was one time when we talked to a customer and tried to pitch this idea of using Kubernetes and containers to them. And we tried to explain to them the benefits of having these immutable images. And the person was really not happy about hearing all of this. And they were like: you know, I just want a container that I can SSH into and install my stuff. So how can I get this? And we were like, okay, we have to take a step back and start a bit earlier and explain that to them again. And that then led to a very interesting conversation, because a lot of the learnings that we apply in the container world, or even the serverless world, like this immutable images and immutable infrastructure thinking, we can actually also apply to VMs.
So that's sometimes how you can get people on board, and it's how we did it back then. We used, back then I think it was Packer and Ansible, to create an immutable infrastructure, sorry, an immutable image that we could then use on VMs. And that would help the transition with this person, because they could still use their beloved VMs, but they would see what benefits this immutable image would bring them, and it would introduce to them very slowly this new way of deploying and this new way of working. And then to take the step from there to containers was a much smaller one. speaker-0 (25:53.39) And you have to show them the pain of their current setup, and then they'll easily transition. I feel like it's the: well, you don't have to manage infrastructure. You don't have to configure these things. You don't have to worry about it. Logs just show up automatically in a third-party service or interface that you can just go to, and they're all right there. You don't need to configure any sort of interaction layer, any sort of logging driver, anything like that. And I'm like, oh, but I want that. And then you go and ask them, okay, I need you to list out for me all the pains that you're having. Well, getting logging working is a real challenge. Sometimes the VM crashes and I don't know what's going on, or it times out, or the CPU usage is too high. We have to deploy these sidecars or, you know, agents onto the machine that keep track of all of the metrics, and then there's understanding what to alert on and whether the alerts are good, and then patching it is terrible. And I'm like, how can you say no to serverless and at the same time say you hate all these other things? Like, it does not compute.
You need to convince people of why they're doing this switch, and not just how they will do it, or what the switch will consist of and what the different steps are to get there. It's: why do we do this switch? What are the benefits that your company, but also you as a person, will get from this? speaker-0 (27:15.862) I remember, this was a long time ago actually, I was working at a company on a team that was called Integration, but now I'm sure it would be called Platform Engineering or some other mistaken name. And we were sort of in charge of the infrastructure for all the environments, including production. Interesting story though: in production we weren't allowed to actually SSH in, although of course, since we were using Microsoft, at the time it wasn't SSH, it was some PowerShell derivative, to actually get onto the machines. And we needed to get access to deploy a piece of architecture that would allow automatic deployments in the future with the new technology we were rolling out. And there were like hundreds of machines. And it was a very simple thing we needed to do: basically run this one script in each of these places, and the script had been really well tested. Of course, there are machines that are not exactly configured the same, even though they're supposed to be. It's like the "almost exactly" is the real struggle there. Especially on Windows machines, where there's like some C++ redistributable, or the installed .NET version, like it's supposed to be 4.5 or whatever, but for whatever reason it has an old version of something on it. And one of the engineers on our team said, I'll take that activity. Now, this was before LLMs, so I don't know how that would have helped. Maybe someone can jump up and down and say they could have figured it out.
He literally went in and connected manually to each of the hundreds of machines there were and ran the script one by one for each of them. That's what he did the whole day, and I didn't find out until the end of the day that this was done. And I'm just, like, mind blown. At what point did it ever feel like this was the right thing to do? If there were five machines, maybe I'd get it, but he literally spent the whole day. I'm like, this could have been a for loop, that's it, nothing more complicated than that to run it. He enjoyed it, thought it was the right thing. So I think it really says that there is this aspect of mindset that really goes into the type of engineer that expects that there's a certain flow that still works. speaker-1 (29:21.08) Yeah, I love that you said he enjoyed it, because I think that again comes back to this: you also need to take the human into the equation, right? I think that's also a big topic when it comes to LLMs these days. These LLMs can make us so much more efficient when we're doing engineering; we can turn out much more code very quickly. But I think LLMs can also take away the joy, and a bit of the relaxation, that we sometimes get out of writing code. Like, at least for me, doing the conceptual work, doing the meetings to get to the architecture and so on, this is very interesting, but it's also quite challenging. And to me, like 10 years ago, there was a very good balance between this and then the actual writing of code, which, controversial opinion, can actually be quite a dumb job to do. Because if you have very clearly specified requirements, then writing those down into code is pretty straightforward. That's why LLMs are quite good at this, right?
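The day of manual sessions really could have been a loop. A minimal sketch, where the hostnames, the script name, and the `ssh` transport are all hypothetical (on the Windows fleet in the story it would have been PowerShell remoting instead); the runner is injected so the loop itself can be exercised without real machines:

```python
# Running one well-tested script across hundreds of machines.
# Hostnames and the remote command are hypothetical; the runner is
# injected so the loop can be tested without real SSH access.

import subprocess

def run_everywhere(hosts, command, runner=None):
    """Apply one command to every host; collect failures, don't stop."""
    if runner is None:
        # Default: shell out to ssh (or a PowerShell remoting equivalent).
        def runner(host):
            return subprocess.run(
                ["ssh", host, command], capture_output=True
            ).returncode
    return [h for h in hosts if runner(h) != 0]

# Exercised with a fake runner instead of real machines:
hosts = [f"web-{i:03d}" for i in range(300)]
flaky = {"web-013", "web-207"}            # pretend these two misbehave
failures = run_everywhere(hosts, "./install-agent.sh",
                          runner=lambda h: 1 if h in flaky else 0)
print(len(failures))  # 2 -- the whole "day of work", plus a failure report
```

The failure list is the part the manual approach never gives you: the "almost exactly" machines surface automatically instead of being discovered one painful session at a time.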
And if LLMs now take away this part of the work that we used to use for relaxing, for the more dumb work where we can turn our heads off, where we can get into the zone and get tunnel vision and just implement the feature, I think this can be quite challenging for people, including me. Your work nowadays consists much more of the conceptual work, the work where you really have to actively turn your brain on, and you don't get so much of this more rewarding work. And I guess this person who ran the script hundreds of times, they had the same feeling with every VM, every machine they did: they got another reward in their brain, and they had another success for themselves. Yeah, I think this shouldn't be underestimated sometimes. speaker-0 (31:14.222) So it's an interesting perspective that the meaningless, I don't want to say that, the menial tasks of software development are actually required for stimulating us, or giving us the freedom to continue to think about the challenging architecture or conceptual ones. speaker-1 (31:33.11) Even for our mental sanity, I would say. speaker-0 (31:36.91) So one of the points of mental sanity that I think you probably secretly hoped I wouldn't ask about, but that can't be left out of the episode, is all of the challenges for why serverless can't be used, and that falls into the dreaded category of performance, also known as cold starts. So what are your perspectives there? speaker-1 (31:57.39) So if cold starts are a problem for your application, then function as a service is obviously not the solution for you. But the solution can still be serverless. That comes back down to what I mentioned initially: even during our conversation, we fell into this trap where we talked about function as a service but always just used the word serverless. I think even if you have a problem with cold starts, serverless can still be a good solution.
You can still use a serverless container platform or something like that. But obviously cold starts are very real, and they're a very important thing to consider. There are ways of optimizing them depending on the CPU architecture that you use, on the programming language and framework that you use, and on how you set up your application. And usually, if you want to use function as a service, it's good to take a step back and think about how you can make your architecture more event-based, maybe even more asynchronous, so that the cold starts aren't that much of a problem for you anymore. And if that's something that you are not willing to do, or that is not efficient for you to do, then choose something else, by all means. speaker-0 (33:14.094) I think that the challenge there is people coming on board saying, oh, I have to completely rethink how we design technology. And I want to be in the space where I'm fully in control of these levers that I have been in control of the whole time, where now I have to actually take into account this critical potential problem. But on the flip side, I really like the perspective, because I think honestly, most of the time it is totally the truth: it likely has a negligible impact on the whole architecture if redesigned with that constraint in mind. There are ways, as you put it. If I just think about AWS, there are both provisioned and managed capacity options, pre-purchasing, et cetera, as well as ways of maintaining warm containers. And so even in performance-critical applications, there are ways of getting around that, even without doing anything extra. And then on top of that, you can of course reduce it further with little unnecessary or complex tricks. I've seen some out there, like Lambda automatic warmers that come in and call your API unnecessarily, every 15 minutes or so, to make sure that you don't have any containers down, so that the container will always be warm without costing you as much.
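The warmer trick described here usually amounts to a scheduled ping plus an early return in the function itself. A minimal sketch, assuming a Lambda-style Python handler and a made-up `{"warmer": true}` marker payload; the real event shape depends on how the schedule is configured:

```python
import json

def handler(event, context=None):
    """Lambda-style handler that answers scheduled warm-up pings cheaply."""
    # A scheduled rule invokes the function every ~15 minutes with a
    # marker payload; returning early keeps a container warm without
    # running any business logic. The "warmer" key is an assumed convention.
    if isinstance(event, dict) and event.get("warmer"):
        return {"statusCode": 200, "body": json.dumps({"warmed": True})}
    # ...normal request handling would go here...
    return {"statusCode": 200, "body": json.dumps({"ok": True})}
```

Provisioned concurrency removes the need for this, at a price; the ping approach trades a little complexity for a smaller bill.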
And you know, honestly, it still comes out to be cheaper than the alternative. But I don't know if it's a good idea, because you're increasing the complexity, which is something serverless sort of promised to eliminate. And you brought up cost, the total cost of ownership that is, of managing a solution. By throwing these complex things in, you're sort of defeating the whole value that was offered in the original hypothesis of switching. speaker-1 (34:52.237) Yeah, you summarized it so well. I think usually these workarounds come down to ego or to passion, which sometimes go together. But yeah, I think they come from this passion that we have to really optimize for performance and for costs, and really bring down the cloud provider bill that I will see at the end of the month. But the actual costs, that I have to spend more of my engineers' time or that my users will have a worse experience, are usually not worth these workarounds. speaker-0 (35:29.972) I think I'm going to short-circuit what you say, totally take it out of context, and interpret it as: passion is the root cause of all incorrect architecture decisions. speaker-0 (35:41.716) I think it actually is in many cases. speaker-1 (35:44.014) No. speaker-0 (35:46.638) I mean, there is something to be said there. I think, you know, once you start the ball rolling down one path, it's hard to really look anywhere else. It's sort of this tunnel vision in one perspective, and then you attach an emotional aspect to it as well: you want to keep that decision. And then maybe that's coupled with another aspect of aligning your self-identity with whatever that is. Like if you're a Microsoft MVP or an AWS Hero, and now you're looking at this from a particular angle, and you're like, well, I don't want to switch clouds, or I want to use the technology that's available, I don't want to switch this other thing. Or if you're an expert in some, I'll say, legacy technology.
The goal, the idea of switching away from that to something that may be better suited, is sort of an attack on how you envision yourself. I do see a lot of engineers have this challenge, I feel like, when I talk to them about serverless. They feel like, no, it could never work. And part of it, I think, is their own definition of it. And so I like how you broadened it in this conversation. I wouldn't normally have looped, say, containers, like direct containers, things that match the OCI spec, or Kubernetes pods, into serverless initially. I wouldn't have normally included them there. But if you throw that into the definition and you separate it from function as a service, I feel like in a way you are tricking people into believing that they've been on serverless all along, and now switching from one platform to another becomes an easier proposition. speaker-1 (37:19.288) Yeah, then they don't need to rethink what they have been doing for years and, as you said, redefine their identity. speaker-0 (37:32.566) Yeah, it does seem like the biggest concern comes from a fundamental lack of understanding. One of the areas that I think still has a huge opportunity for improvement over time, and maybe you have a perspective on this: historically, I used to say that there are certain technologies that are way better for doing certain things. And the arguments that I had, say, even against Kubernetes early on, compared to function as a service or some other extreme aspects of serverless, were whether or not you needed access to GPUs, or better control over the infrastructure that you're actually getting under the hood. If you're deploying on virtual machines proper, or even bare metal, you had additional capabilities. But I think that over time, we are actually seeing a migration in the hyperscalers and other cloud providers offering the same configuration and the same control in function as a service, in all aspects of serverless.
And that sort of defeats any justification. And so when I look at posts on the internet, we moved back to bare metal and everything is cheaper because we can manage it so much better, I think to myself: they moved away from serverless based on what serverless offered today, but they didn't account for the fact that serverless is an evolving concept, and that it will grow in understanding and complexity to match the interface that is optimal for the consumer's software developer experience, if you will. And as that changes, you may get more and more access to exactly the interface that is optimal, without having to worry about all the complexities that made you migrate off of bare metal or virtual machines in the first place. I don't know. Maybe this is just my own rant in the area, one that I just can't get out of my head, though. speaker-1 (39:19.63) Yeah, but I think that's something we see quite a lot these days, that people go back. We see all these blog posts of people going back to the good old days, when they used to just have that one server. And I think there was recently a post on Hacker News where somebody hosted multiple of their startups on just one server each, with a SQLite database and probably one container, or even just a systemd service. And yeah, I think it again comes back to us being human, and that it can be something that is also much more fun to play with, because you have more control. And then it allows you to delve into topics like Linux and how you set up a proper firewall on Linux and stuff like that, which I actually also miss doing sometimes. I'm running Linux on my own laptop because that's where I actually get to play around with it, use it, and dive deep into it. Whereas if I'm just setting up Linux in a container, or even using function as a service, then it's all abstracted away from me.
I also do like this idea of going back to the roots and actually learning about those topics again. speaker-0 (40:40.044) You know, there are some aspects that I really want to agree with, and then there are other ones where it's just, Linux is the worst operating system except for all the other ones. I get so scared. I don't do a whole lot of software development on my machine, but for the podcast, I release the website and the episodes. speaker-1 (40:49.006) I love that. speaker-0 (41:02.702) And there's some automation involved, and I'm installing some packages, and every day I'm just like, I've got to run some package manager, and there's going to be some vulnerability in there. It's just a matter of days until I have some malware on my machine, and I'm trying really hard not to, but I'm just scared that it's an inevitability in so many ways. And when I can, I'm just like, you know what, I prefer to push this directly to production, not think about it, not install anything, and it will just work. And if it doesn't, the build will fail and tell me about it. Stuff will run. There are no patches, there's no weird stuff, and I don't have to be scared of my operating system. And then there are people who are like, no, I love this, this is my favorite part. I love being afraid of what malicious services are running on my machine and connecting to the internet. And you have WireGuard up, or PeerGuardian or something, monitoring every single IPv4 and IPv6 connection that's being created, going, I know what that is, that's fine; that one, that's not okay. And I just can't imagine managing that. speaker-1 (42:06.67) Yeah, I totally get that. And I think it's also, you know, a bit of a world of unicorns and rainbows sometimes.
Because, you know, when I talk about my personal laptop, for example, or the home lab that I have running at home on my Raspberry Pi, if stuff goes bad there, sometimes it's just my spouse who's angry about it. But if stuff goes bad at my company in production, then I need to actually get up at 2am and fix that stuff. And I think there, very quickly, you will start to not have so much fun anymore playing with all these little things. So there are also different use cases and different worlds, where one can be fun and the other one not so much. speaker-0 (42:52.622) There's definitely the Daniel Pink Drive angle, where he talks about Motivation 3.0: if you pay someone to do an activity, it has an opportunity to steal away the enjoyment you get from it. But there is definitely also the flip side: if there's stress associated with the thing that you're doing, then that also has the opportunity to steal your enjoyment and motivation away from it. So I have to ask. As a consultant, you get to have unique opinions on cloud providers, given you work with different customers, and you have seen some complex setups that probably go quite wrong with them. Does anything particularly come to mind? If you get a preference of where to build, do you like Cloud Functions, Lambda, Kubernetes on AWS, Cloudflare Workers? Do you have, you know, the thing that you like the most? speaker-1 (43:42.008) So one of my favorite setups is actually based on Lambda. And I really like that, for almost every programming language that is supported by AWS Lambda, there is a kind of shim or framework that allows you to just build a regular web app, be that an Express app when you're using TypeScript, or just a regular Go HTTP server, or whatever.
And it just allows you to add a couple of lines of code to wrap this initial web server that you have created, and run that web server exactly as is in AWS Lambda; I think the same exists for other function-as-a-service platforms. And I really love this setup, because you get all of the advantages of being able to develop it locally on your computer with the tools that you use and love. You can start your server with one command, and you don't need anything like LocalStack to run your application. But then you can, with one command, deploy it to the cloud. You have all of the benefits of function as a service, like integrated tracing and integrated tooling and all these things. You have the cost benefits that you see on your cloud bill at the end of the month. You have the almost infinite scale and all of these good things. So for me, this is sort of the best of both worlds that I have seen, and I love the setup. And it also really easily allows you to do what we discussed earlier: you build this initial monolith of a web application, and then you break out parts of it later on. speaker-0 (45:21.542) She's really the official spokesperson for serverless. You nailed all of the benefits and eliminated all of the possible negative concerns there. I think one thing we didn't really talk about was the improved developer experience. I mean, I know there's the whole management cost, the total cost of ownership, which I think we did get into. But I think there is a really interesting thing here, where one of the huge pushbacks that I have seen in the serverless space is people saying, oh, it's too difficult to spin up a service, I need to use some bloated extra technology to run it. And I'm always super confused there, because all you need to do is literally start the application as you would any other process on your machine, as if you hit F5 or F7 or whatever your keyboard shortcut is to run.
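The "shim" described here exists as real projects (serverless-http for Express, the AWS Lambda Web Adapter, and similar), but the core idea is just a translation layer between the platform's event format and an ordinary HTTP request. A hand-rolled sketch for a plain Python WSGI app, with the event fields simplified and assumed rather than matching any real Lambda event exactly:

```python
import io

def app(environ, start_response):
    """An ordinary WSGI web app; runs locally, e.g. with wsgiref.simple_server."""
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [b"hello from " + environ["PATH_INFO"].encode()]

def lambda_handler(event, context=None):
    """Translate a (simplified, assumed) function-as-a-service HTTP event
    into a WSGI call, so the same app serves both locally and as a function."""
    environ = {
        "REQUEST_METHOD": event.get("httpMethod", "GET"),
        "PATH_INFO": event.get("path", "/"),
        "SERVER_NAME": "lambda",
        "SERVER_PORT": "443",
        "wsgi.url_scheme": "https",
        "wsgi.input": io.BytesIO((event.get("body") or "").encode()),
    }
    captured = {}
    def start_response(status, headers):
        captured["status"] = int(status.split()[0])
        captured["headers"] = dict(headers)
    body = b"".join(app(environ, start_response))
    return {"statusCode": captured["status"],
            "headers": captured["headers"],
            "body": body.decode()}
```

Locally you would serve `app` directly; deployed, the platform calls `lambda_handler`. The real adapters additionally handle query strings, binary bodies, and the various event formats.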
And I don't know if it's a disconnect that I'm not seeing, or whether it's just not putting the pieces together, that it's really just so easy to run serverless technologies locally. speaker-1 (46:19.372) Yeah, I think it's also a bit connected to the fact that, usually, when you change something in your application, you don't just do serverless, right? When you're changing things, you're probably changing more than just taking your application as is and putting it onto a serverless platform. And I think it's usually this complexity, of all the things that you're trying to optimize and change in one step, that can be quite overwhelming. Because then at the same time, maybe you're also trying to use infrastructure as code or a serverless framework to deploy your stuff. And then you're also trying to use a serverless database at the same time, and how do you use that locally? I think it's usually the complexity of trying to overdo the optimizations, of trying to do many things at once, that creates this illusion. speaker-0 (47:16.846) Did you ever run into this yourself, trying to do some development with a third-party repository or another service that had a ridiculous setup like that? speaker-1 (47:26.514) Oh yeah, even with my own stuff. I've definitely fallen into that trap. It again probably comes down to passion, because we get passionate, we want to optimize many things, we want to do the best thing that we can, and we can fall into that trap. I've fallen into it many times. speaker-0 (47:52.43) How did you pull yourself out? speaker-1 (47:55.054) Sometimes I didn't. I worked with what I had, because it was already too late. And sometimes it makes sense to take this step back and, like we said before, start with the why. What am I trying to achieve? What problems am I trying to solve?
And are some of the things that I'm now trying to add or change actually not contributing to this initial goal that I've set for myself? speaker-0 (48:22.122) So with that, let's switch over to picks. So, Lena, what did you bring for us today? speaker-1 (48:27.576) So I already told you that I have two picks and that I would decide on the spot. So I'm going to do that, and I'm going to decide on Home Assistant. I don't know if you know it; it's a huge open source project, I think one of the most active ones on GitHub. And it allows you to create home automations, but with everything being open source and on open standards. And it's one of the projects that I have been really, really passionate about lately, to work with and to set everything up in my unicorns-and-rainbows world at home. speaker-0 (49:01.096) Now I'm drawing the connection to something you said earlier, about your spouse getting annoyed when your home lab is having an incident, when you couple them together with Home Assistant. Good catch. That's my fear too. When you have the robots in control, you need that manual override that's so many times joked about in science fiction speaker-0 (49:24.502) and popular culture, about needing to open the door when your power or your server is offline. speaker-1 (49:32.418) Yeah, there's even a term in the home automation community called the spouse approval factor. It's quite an important part of any kind of automation that you do: you can do it just for fun, but it's still going to affect everybody who lives with you, and they're still going to need to be able to live in your home without big downsides from your hobby. speaker-0 (49:57.708) Are you limited in what you're allowed to edit or have control over? You know, like first you do the prototype, then get sign-off, and, you know, ask forgiveness later. speaker-1 (50:08.986) No, my wife is luckily very forgiving, but she expects me to fix stuff over time if I fuck up.
speaker-0 (50:17.87) Do you have internal SLAs? speaker-1 (50:21.686) Yeah, exactly. speaker-0 (50:25.646) I don't know if we should bring spouses into the episode. It's just, I already fear, if I was going to do this... It's how I feel about our production services, where we've already promised ridiculous SLA numbers and uptime, and I feel like I'd be stressing even more that I'd come back from vacation and the lights wouldn't be working. speaker-1 (50:47.694) Oh yeah, that's a horror scenario. speaker-0 (50:52.99) I don't know what would be worse. You know what, I think every time someone tries to get me to go down the home lab route, I just carefully say, you know what, I'm good with the amount of stress in my life. speaker-1 (51:06.414) So I started out not with the intention of optimizing my life or doing anything good for us that would make living in our flat easier, but just to have fun. And I think that's the right approach. Then over time, hopefully, you grow into creating useful stuff. But I think you shouldn't start out with that expectation towards yourself. speaker-0 (51:34.879) Yeah, don't do it with the goal of replicating something you saw on the internet. Is there one aspect of Home Assistant that you feel like you can no longer live without, the most valuable thing that you've implemented? speaker-1 (51:47.054) So we have a go-to-sleep routine, which I quite love. In the evening, when I put my phone onto the charging pad, it monitors the areas where we have our bathroom and bedroom. And as soon as there is no more movement, it kind of turns off the whole apartment. It shuts down all the lights, removes the power from our office area, turns off the digital clock displays on the wall, turns off all the speakers and stuff. And that is just so convenient.
And it also gives me this mental peace that it's not just me who sleeps now; it's my whole apartment and, kind of, my whole life. speaker-0 (52:32.088) Okay, and there's also an environmental impact there, a reduction in energy usage. How do the digital clocks on the wall keep their time for coming back after the wake-up? speaker-1 (52:43.214) So the clocks don't turn off, it's just a display that... speaker-0 (52:46.299) I see, you've got special clocks then. speaker-1 (52:49.23) Yeah, it's actually a very cool project called Awtrix that you can find on GitHub, which is also open source. It's just an additional display made up of big pixels, so it looks quite nerdy, and you can basically display whatever you want. It also shows a welcome message whenever my wife or I come home, or a message when we need to walk the dog, and stuff like that. So a really cool project to look at. speaker-0 (53:15.63) I like it. I still have this goal of going on vacation and having all my plants water themselves, but I haven't found the... It's too much stress, or too much motivation needed, to get to the point of actually building what I need to make that happen. speaker-1 (53:30.318) Yeah, the plants are quite a big endeavor. I haven't gone there yet. speaker-0 (53:34.638) That's my number one. Okay, so my pick. I think I was just going to be lame this week. I don't drink coffee at all. I don't know what it is; I just never loved it. I have this open challenge for anyone I'm in a room with to serve me, or get me to try, the best coffee they think is in the entire world. And so I am now a coffee aficionado who hates coffee. That's just something to know about me. I hate it, just don't like it at all. But I do drink a lot of tea, so you have to wonder what I have in these mugs every time. And this one I particularly like. It's called Himmelstau, which is like sky dew or heaven's dew. I think it's a euphemism for something, but I think it's absolutely fantastic.
I don't know what it is. There are just so many flavors, and it always seems like a great alternative to juices or coffee, or anything heavier at the end of the day. I absolutely love it. speaker-1 (54:29.262) Nice. Yeah, I will have to try that. I'm usually quite basic, and I just go for Christmas tea all year round. So I'm happy to try something new. speaker-0 (54:38.158) What is Christmas tea? speaker-1 (54:41.231) You know, with cinnamon, and maybe orange flavored. speaker-0 (54:46.547) It's like Glühwein without the wine part. speaker-1 (54:51.988) Exactly. speaker-0 (54:55.286) So thank you, Lena, for joining us on this episode of Adventures in DevOps. speaker-1 (55:00.27) Thanks so much for having me. I had a lot of fun. speaker-0 (55:02.478) And thank you so much for joining us on this episode, thanks to all the listeners for tuning in this week, and hopefully everyone will be back again next week.