Welcome everyone to another episode of Adventures in DevOps. Joining me in the studio today: Warren. Warren, how are you?

Thanks for having me back. You know, I actually have a good fact for today that I thought was really interesting to share. There's a malware out there called OtterCookie, and I know the economy for engineers is not so great at the moment. However, a lot of the job-req advertisements out there may be from malicious attackers who are trying to get you to run GitHub repos or download packages from the internet to pass the interview, and those things will get installed on your machine and either try to steal your local crypto wallets or, worse, be used for attacking whichever company you do get hired by. So I know it's a real struggle when you want to complete whatever take-home assignment or, you know, get the next job. But you really have to be careful in today's economy, because this tech is out there just waiting to capitalize on a simple mistake.

That's just nuts. Like, let's just kick someone while they're down, right?

Yeah, like all of that is nuts. I think getting homework from an interview is nuts. I think, you know, potentially installing something on your computer that's going to make your computer go wild... it's multiple levels of crazy. Speaking of which: hi, Jillian, welcome to the show.

Thanks for having me back. You guys are making me feel guilty. Thanks for having me back. Like it's conditional at this point: just show up. Oh, I don't know, I'm not sure that it is for me, but no, thank you. I'm still appreciative of being here. I guess that is what I'll say.

Well, I'm happy to have you both here. You guys make my job a lot more fun and entertaining. And speaking of fun and entertainment, I'm looking forward to this episode. We have Alex Kearns joining us in the studio today, principal solutions architect from... dude, you just told me how to say this and my mind already went blank.

I've heard every possible permutation of how to pronounce it. You said my name correctly, which a lot of people don't do.

Welcome to the show, man. I'm happy to have you here.

Great to be here. Thank you.

Cool. So give us a little bit about your background, now that we know how to pronounce where you work.

Yeah, great. I mean, I come from a software engineering background, if you go way back to the post-university career, and then made the move into cloud: both kind of internal consultancy and platform-type work, and then consultancy in the traditional, external, customer-facing sense. But I'm still very technically driven. I like getting my hands dirty. That, for me, is the most exciting part: building things, breaking things, learning from it. Yeah, paper architecture is not my forte.

I hear you there. I think for people who succeed in this industry for a long time, there's a certain amount of entertainment value that you get from your job.

I'm definitely doing it wrong then. I mean, my new belief is that when I retire, I'm just going to go back to drawing boxes and lines. That's just the best part of my job: when I can get on a piece of paper or a whiteboard with boxes and lines, you know, not even any words necessarily. That's prime enjoyment right there.

I'm going to get my typewriter and I'm going to go to my cabin in Montana. Screw all you guys.
So, speaking of "I'm going to do something really simple and turn my brain off": I bought a paint-by-numbers kit, and that has been... it just reminded me a lot of what you said, because I just sit there and paint in the numbers, and I don't care that I'm nearly forty and doing an activity for children. It's great. You just turn your brain off. It's great.

Anyways, I've actually heard quite a few people comment on how therapeutic and relaxing that is.

They really are. They're very relaxing. It's great.

I think with so many things in the tech industry, brains are so switched on all the time. There has to be a way to switch off, otherwise work-life balance is pretty much nil.

Well, it's interesting you bring that up, because very commonly we keep being in an always-production mode, like everything we do is happening at a critical level and we have to pass that test. Whatever work we're doing, there's no practice involved; it's always runtime for us. And there's a bunch of research out there that says we have to go into practice mode, where mistakes can be made, failures can be had, and we can actually learn from them intentionally. Without that... that's actually one of the biggest causes of burnout. So, you know, if it's going home and doing watercolor painting, whatever it takes: if that somehow helps you recharge, definitely do it.

There's also a lot of evidence showing that when you take on those kinds of activities, while you're not consciously thinking about the problem, your subconscious is continuing to work on it, and that's when some of the big insights and big breakthroughs occur. Like, there's a really common anecdote of when you're in the shower and you have this great idea; that's a classic example of that whole process in action.

True. I don't do important jobs anymore. Like, I'm just not doing them. I mean, they matter, but not on a huge, "I'm always in production, everything has to be perfect" level. It'll be fine if it's not done until a couple of days later. I used to do important stuff, though, and I don't want to anymore. So I suppose that's your lesson: this is where I'm at, doing my paint-by-numbers, and things don't really matter.

Well, one of the things we were talking about before we started recording the episode was leveraging gen AI, and Alex, you've got some experience with that, specifically some real-world examples where you've done it. I think that's one of the cool things about AI: it goes through this buzz cycle, but there are people who are actually putting it to real-world use. I'm interested to hear your take on that.

Yeah, I think it's a really interesting topic. It's something where, if you go back eighteen months, maybe ChatGPT was just about starting to be established as almost a household name. People weren't necessarily using it actively, but tools like that were becoming more and more common. And I think, as with any technology, it's when it gets democratized, when it gets put in the hands of people that aren't having to spend millions on hardware and those kinds of things, that it actually starts to become an awful lot more prevalent.
So I think we saw this with other technologies. Think back to the mid-twenty-tens, I suppose, when things like AWS Lambda came out, so serverless technologies, and then the years after that, where every SaaS company that existed was going for a "we now have a serverless offering." And it's like, what is serverless? Is it just a managed service? What is your definition of serverless, right? So you saw the buzz around that. You saw the buzz around even cloud, which... I mean, public cloud is nearly twenty years old if you go back to AWS's first service, so it's not new, it's not shiny anymore. And AI, I think, is going through the same thing, but at a much, much faster pace.

Well, that's a really interesting comparison, though. I want to stop here for a second, because I feel like there's a sort of weird duality. Serverless made it easier for people to get into building stuff and releasing applications, because it didn't require you to purchase or allocate huge data center capacity to make that happen. I feel like where AI is currently at, it actually does require only the most expensive access to hardware or service providers. So I don't know if it's been democratized yet. I mean, there are a lot of services out there that claim to get you access to some facet of AI, and I know there are the ChatGPTs and LLMs out there where it's questionable how much value they're returning to you. But the real core aspect of being able to provide the underlying resources or technology to people, I think, is still much too far away.

I think the way you described it is great: it's giving people access to a facet of AI. I mean, if we think about AI as a general topic, artificial intelligence more broadly has been around for decades. It's only really when you start breaking it further down, into machine learning and deep learning and now generative AI as a kind of subset of that, that the generative AI and AI terminologies become almost interchangeable, certainly from an industry perspective. I think it very much depends on what people are wanting to do with AI and how specific their use case is. So if we think about things like ChatGPT, that's obviously a very specific use case. It gives you very generic responses to things. It hasn't got access to your specific business data. But it's free, and as with any free product, you are normally the product. Of course you can opt out of things, but by default it's collecting that chat history to improve the service for everyone. You've then got things like Amazon Bedrock. Bedrock is AWS's generative AI offering that was announced in twenty twenty-three. Bedrock offers two different modalities, I suppose, in how you can use it. One is on-demand, where you pay per thousand tokens. That's the way you can just go and build something. Again, it's similar to ChatGPT in the sense of its generic knowledge; it's whatever the large language model has been trained on. But because, as with any kind of cloud-hosted managed service, they can take advantage of economies of scale, they can give you pay-as-you-go pricing.
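For readers who want to see what that on-demand, pay-per-token usage looks like in practice, here is a minimal sketch using boto3 and the Bedrock converse API. The model ID is illustrative; any model enabled in your account would do:

```python
import boto3

# On-demand inference: you pay per input/output token, with no infrastructure to manage.
client = boto3.client("bedrock-runtime", region_name="us-east-1")

response = client.converse(
    modelId="anthropic.claude-3-haiku-20240307-v1:0",  # illustrative model ID
    messages=[
        {"role": "user", "content": [{"text": "Summarise what Amazon Bedrock is in two sentences."}]}
    ],
    inferenceConfig={"maxTokens": 256, "temperature": 0.2},
)

print(response["output"]["message"]["content"][0]["text"])
# The usage block is what on-demand billing is based on.
print(response["usage"])  # {'inputTokens': ..., 'outputTokens': ..., 'totalTokens': ...}
```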
The moment that you want to fine-tune that model, or train a different model with your specific data, you're going from paying fractions of a cent per thousand tokens to having to commit to something like thirty thousand for three months, because you are now the one bearing the cost of all of that hosting rather than AWS making it available on demand. And of course there are ways that you can augment the use of large language models with your own data without going to that extent. Even just including examples of your specific data in a prompt, or retrieval-augmented generation, where you can load your own documents into a vector database and have the model retrieve that data from there: there are lots of ways you can get quite a long way without spending huge amounts of money. The moment you want to get to "I have complete control of my model, I train it with my specific data," then you'd hope that's when you start getting to the type of customers who can afford to spend that kind of money.

So, in your capacity at your current job, where you're interfacing with clients and whatnot, do you find there is one particular provider or one set of tools that you're constantly going to, or a whole breadth of them for different types of tasks?

Yes. We are an AWS consultancy, so everything tends to center around Amazon tools. But of course Azure and Google both have AI offerings now as well. And Microsoft, with their investment into OpenAI, are the only provider that offers the OpenAI models on the public cloud, and I think that will stay the same for a number of years. In terms of the tools that we reach for, there's definitely a combination of vendor-specific but also open-source tools. For hosting large language models, the most frictionless way to access them is through Amazon Bedrock: a really straightforward API, easy to write scripts to interact with, synchronous or asynchronous chat, however you need to. But then you can start to bring in open-source tools. Things like LangChain, a really popular open-source framework, where you can use Bedrock, you can use Microsoft's hosted models, Google, OpenAI, however you need to interact with your large language models, and then bring in those other parts like retrieval-augmented generation, where you can say: I've got a database full of documents; these are my business documents, my sales reports, my financial reports, whatever they need to be. And then when your large language model takes your prompt, it can use that data you've provided, without you having to specifically train the model, to augment the generation of its response. And there are lots of open-source tools that can do things like that.

I think what will be really interesting as these... I want to say years, but I think it's going to be months with how things are going at the moment. As the next few months go by, so many open-source packages are popping up, but it's how these open-source packages stay around long term. Unless they are backed by a big business, what makes them commercially sustainable? We've seen frameworks like CrewAI, which is a framework for building AI agents, multiple agents, and orchestrating which agent gets called for a particular type of task. They've now introduced a commercial model where they can take on some of the management and observability around those agents, or you can just use the framework open source.
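To make the retrieval-augmented generation flow described here a little more concrete, a minimal sketch using LangChain against Bedrock. It assumes the langchain-aws, langchain-community, and faiss-cpu packages; the document contents and model IDs are illustrative:

```python
from langchain_aws import BedrockEmbeddings, ChatBedrock
from langchain_community.vectorstores import FAISS

# Load your own documents into a vector database (here: an in-memory FAISS index).
docs = [
    "Q3 sales report: revenue grew 12% quarter over quarter.",  # illustrative business data
    "Finance policy: all expenses over 500 pounds require director approval.",
]
index = FAISS.from_texts(docs, BedrockEmbeddings(model_id="amazon.titan-embed-text-v2:0"))

question = "How did revenue change in Q3?"
# Retrieve the most relevant documents instead of fine-tuning the model on them.
context = "\n".join(d.page_content for d in index.similarity_search(question, k=2))

llm = ChatBedrock(model_id="anthropic.claude-3-haiku-20240307-v1:0")
answer = llm.invoke(f"Answer using only this context:\n{context}\n\nQuestion: {question}")
print(answer.content)
```

The point of the pattern is exactly what Alex describes: the model stays generic, and your business data rides along in the prompt at query time rather than in expensive training runs.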
So I feel like I want to ask about that. You're supporting your customers in utilizing AI within their businesses. Have you seen significant change? I mean, things are changing very frequently on a monthly basis, so compared to, say, early twenty twenty-three to now: what's the next thing? What are customers now most interested in utilizing? Is it one particular provider more so than others? Is it just a smattering of everything, or do you really see something taking off specifically in the businesses that you're working with?

So I think it's worth keeping this provider-agnostic and thinking more about use cases and drivers for the use of AI. Thinking back twelve months, even twenty-four months, there was a lot more of people wanting to use AI for the sake of using AI. We've had conversations with customers who have said: my board member has said that as a business we've got to be using AI, because investors want to hear it, the public needs to see it. Can you help us use AI? It's like, well, of course, but let's take a step back. Let's try and work out where there is a genuine use case. If we use the Gartner hype cycle as a framework here, I think we've definitely passed that peak of inflated expectations, the top of the hype cycle where everyone is using AI for the sake of using AI and people want to do way more with it than is really feasible, ethical, sustainable, everything. And now I think we're starting to quite rapidly get to the point the hype cycle terms the trough of disillusionment, where people are thinking: I'm seeing AI so much. Every service, every tool, every news article. I mean, even in the UK today, our Prime Minister came out and announced a big plan for rolling out AI and growth programs across the country. So it's gone from just those big tech providers to government, to politics, to everything. It dominates everywhere. And I think you're almost starting to see a little bit of fatigue in companies, where it is a case of every vendor talking to us about AI or their latest AI-powered offering. And the next stage, which I don't think we're far away from now, is working out, and really making visible, the use cases for AI that are here to stick around.

Yeah, no, I totally get it. It is everywhere, as you said. And I sort of want to ask our resident ML expert here what she's seen compared to what you brought up. I know she loves to talk about it.

I love AI. I think AI is very cool. I'm still really seeing people on the upward side of the hype cycle. Like, I have a small AI service that I offer. I had to stop offering it publicly, because people were just coming in with these very outsized expectations. And now I'm like, okay, we have to schedule a ten-to-fifteen-minute talk first so that I can adjust some of these expectations. But besides that, I think it's great if you're using it for what it's good at, and it's terrible if you're not. I think probably a lot of the public policy stuff might be a little bit outsized in terms of what it can do. But maybe people making those kinds of requests, and just being like, well, shouldn't it be doing this?
That's probably what's going to drive innovation forward. So I have kind of mixed feelings about it, I guess.

Well, I want to drill in on that for a second: having the right expectations for it. Alex and Jillian, you both brought that up. What are some of the good use cases where you've seen AI really make a difference, Alex?

So I can talk about one specific customer use case where, as part of a migration, we had two hundred and fifty PHP cron jobs that were running on an on-premises server. Now, some of these were years old, dating back to early PHP five, and PHP four with some of them. Some of them were little scripts, fifty lines; some of them were four or five hundred lines; and then you have the ones that import from different files. And we took the view of: actually, is there a way we can do something here to speed up the analysis of these scripts? There are a few things that we need to know. We need to know what databases this script talks to. Does it interact with any other services, so APIs? Does it interact with things like, in the case of the on-premises servers, the sendmail binary on the server? Because this particular customer, for the rest of their business, used a managed SMTP service rather than sending emails from on-premises servers. So we did a bit of a proof of concept, primarily around a way to speed up our own analysis, because, yeah, nobody wants to read through line by line.

No, exactly. Well, if you're looking at, just as an example, you said accessing a particular database, right? If you have calls out to some on-prem or third-party data provider and they're migrating to AWS, they may be going to either a NoSQL option or RDS, and so you want to make sure that those get converted, assuming you just port the scripts, you know, lift and shift into EC2s. So a question that I might have is: where do you find yourself on the risk scale? What happens when the LLM, which is not going to be perfect... I mean, obviously if it makes up tables and whatnot that aren't actually being used, that's fine, but I think the false negatives would be more of a problem. What happens if it misses a table? Would that have caused an issue during the migration, and how did you think about mitigating those sorts of risks?

Yes. So what we did, rather than invest the time upfront to build a fully working solution, was: let's do a test on a few scripts first. Let's try it over three or four scripts, scripts that we have done the analysis on by hand, so we know we've got a golden answer, a ground truth that we're comparing against. When we did run it across the full data set, we did dip sampling to make sure that a reasonable portion of them were accurate. The other thing we did was, when migrating those scripts, we made use of different environments. These were going to a Kubernetes cluster in AWS, and because we had a dev and a staging environment, we knew that we could run these scripts in a sandbox, pre-production environment. If they failed, so be it; it's not going to bring the business down.
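A rough sketch of the kind of workflow Alex is describing: send each script to a model, ask for a structured summary of its dependencies, and compare a handful of results against hand-written "golden" answers before trusting it on all two hundred and fifty. The prompt wording, file names, expected values, and model ID are all illustrative:

```python
import json

import boto3

client = boto3.client("bedrock-runtime")

PROMPT = """You are auditing a legacy PHP cron job ahead of a cloud migration.
Return only JSON with keys: databases, external_apis, uses_sendmail (boolean).

Script:
{source}"""

def analyse(source: str) -> dict:
    """Ask the model for a structured dependency summary of one script."""
    response = client.converse(
        modelId="anthropic.claude-3-haiku-20240307-v1:0",  # illustrative model ID
        messages=[{"role": "user", "content": [{"text": PROMPT.format(source=source)}]}],
    )
    return json.loads(response["output"]["message"]["content"][0]["text"])

# Validate against scripts analysed by hand first, so there is a ground truth.
golden = {
    "report_mailer.php": {
        "databases": ["orders"],
        "external_apis": [],
        "uses_sendmail": True,
    }
}
for name, expected in golden.items():
    with open(name) as f:
        result = analyse(f.read())
    assert result == expected, f"{name}: model said {result}, ground truth {expected}"
```

Since models are not deterministic, a sketch like this is a sanity check rather than a guarantee, which is exactly why the dip sampling over the full data set still matters.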
And it's not going to send customers emails either, because we can trap emails; we can make sure that any calls outside of a particular network are monitored and blocked. So we could quite easily see that we hadn't got to just cut straight over to production.

I think the point around the risk, and the trust that we often put in, or people are increasingly putting in, large language models is really interesting, because as you start to see them used in more regulated industries, there's an incredible amount of power, I suppose, that large language models are being given. And when we get to the point of a model being able to be the only process and the only tool in the loop... I don't know, but I don't think we're there yet. Even when you think about more traditional machine learning use cases: one of my favorites is fraud detection in banking, where you make a purchase on a credit card, and if it doesn't look like something that you would typically do, or it's in a country that you haven't been to before, or an anomalous amount, then models should pick that up and say: no, that's not a transaction we're going to allow to be processed, because we think it's not you. But that's been honed and refined over probably decades. Are large language models going to improve quickly enough for us to be able to give them that level of power and that level of trust?

I mean, that's a good point. These scripts that you were migrating: worst-case scenario is they just didn't run, and they didn't necessarily impact user activity, and if they crashed, you'd get the logs and then you could go investigate. So using an LLM in this area was inherently un-risky. It just helped speed up the initial analysis, but at the end of the day, it didn't really matter, right? Over the whole body of work, a human could have made a mistake there too, and it wouldn't have had that big of an implication. But how are you starting to see customers utilizing AI in, well, situations that should be risk-averse, where they're leaning into it more than they should? And how do you even evaluate that, or how do you better avoid it? I think you did sort of lean on doing sampling and verifying the outputs, but maybe there's something more holistic. Because I feel like, with the adoption of AI coming, more and more companies will do the wrong thing, right? Engineers will, either accidentally via negligence or just, you know, laziness or whatever it is, get into a state of "this is a great way to absolve myself of the challenge of doing all this work." How do we counteract that?

So I think with customers, we're quite upfront. Different consultancies, different companies may have an alternative view, but speaking with my official employer hat on: if a customer is trying to do something that doesn't make sense, and we don't think there's going to be significant business value in it, then we will be open about that and make the customer aware that what they're doing with it isn't likely to be successful. Obviously, if there are ethical or legal concerns around what they're doing, then, well, we're a technical consultancy: we can raise concerns, make customers aware, but ultimately due diligence is on the customer side.

It is coming, though, right? Like, I really would like to see some concrete accountability.
I mean, we don't have anything quite at the level of the Knight Capital disaster, where they lost, I want to say, four hundred and sixty million dollars, and that didn't involve AI at all; that was just pure automation and legacy systems causing a mistake. And now we definitely have automated car companies; I believe there was an incident where there was a death in Arizona or New Mexico a number of years ago, and the company didn't get sued or anything like that. So it does seem like accountability is something that's going to come up more and more, and I don't really see anyone working on adequate safeguards here. I mean, there's the "oh, we're afraid of AI," but we're not really talking about the companies that are utilizing it, I feel like.

So I think there are two parts. There's accountability, and there's explainability as well. Again, thinking more about traditional machine learning: some algorithms are significantly more explainable than others. There are a lot of algorithms, and increasingly so as we get into the generative AI and large language model space, that are a bit of a black box. There's a lot that goes into them, and it's very hard to explain why a particular result has come out. And again, models aren't deterministic, so you can do as much testing as you want, but you can never be truly one hundred percent certain. You can be very, very confident. If anybody can give a one hundred percent guarantee, then it probably isn't machine learning that's making the final call.

But you can ask the model if it's correct, right? It's right. It's obviously right. Haven't you ever helped a kid with their math homework before? Same thing.

There's some interesting stuff that AWS are working on. For a while now they've had Bedrock Guardrails, which is there to try and prevent a large language model from responding about certain topics. But of course, you have to give it a list of topics to not talk about to start with, and if you haven't thought of a topic, then, again, you're reliant upon a human to have that exhaustive list before you can constrain the model. Another one that's come out very recently, within the last month or so: AWS make use of automated reasoning in lots of places across their cloud. When it comes to cryptographic operations, making sure that encryption is doing what it should be doing, there's mathematical proof behind some of those operations. That's something that is used across all of AWS today, but only very recently has it come to Bedrock. I think the feature is called Bedrock Automated Reasoning, where you can build up rules to say: I want mathematical proof that the response given is in accordance with these rules. Which is quite cool. I'm yet to play with it, but it looks very promising. And AWS are generally fairly good on the research side of things. They're certainly not perfect in terms of some of the decisions around releasing services that are maybe just the wrong side of MVP, which we have seen a few times, but that I think is a side effect of being customer-obsessed: perhaps there's a customer that needs a particular set of features and the easiest way is to build a service for it, but maybe it's not suitable for every use case and every customer just yet.
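For reference, a minimal sketch of the deny-list style of guardrail Alex mentions, using the boto3 Bedrock control-plane API. The names and wording here are illustrative, and it shows the limitation he points out: the guardrail only blocks the topics a human thought to list up front:

```python
import boto3

bedrock = boto3.client("bedrock")

# A guardrail only blocks the topics you thought to enumerate in advance.
guardrail = bedrock.create_guardrail(
    name="support-bot-guardrail",  # illustrative name
    description="Keep the support assistant away from financial advice.",
    topicPolicyConfig={
        "topicsConfig": [
            {
                "name": "financial-advice",
                "definition": "Recommendations about investments, trading, or tax planning.",
                "examples": ["Which shares should I buy this month?"],
                "type": "DENY",
            }
        ]
    },
    blockedInputMessaging="Sorry, I can't help with that topic.",
    blockedOutputsMessaging="Sorry, I can't help with that topic.",
)
print(guardrail["guardrailId"], guardrail["version"])
```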
But yeah, I think there's some really interesting stuff going on around automated reasoning. And for AWS's first-party models, the Amazon Nova family, they've got the AI service cards, which go into quite a lot of detail about how the models are built and are fairly transparent about the way they work. I think that's something that end users of models need to be more aware of. At the moment, it's very, very easy to log on to chat.com, which I think is the domain OpenAI spent quite a lot of money on, and type something. But you don't know where the data has come from to build that response. You don't know the reasoning behind that response being given. As these systems get more and more embedded in people's processes and workflows in business, there's a need to understand the why: to ask the questions and have the ability to get an answer of "this has happened because the model is doing this."

I don't know. I worry that if we're relying on education to make people able to utilize AI and technology better, we're not going in the right direction. I say this because, very frequently in the security domain, we have the same sort of mantra: you are going to stop security events, prevent attackers, and remove vulnerabilities through straight education. There's only so much that you can achieve there, and if your security strategy is "educate the users," you might as well have given up already. And I worry if that's the direction we're going: oh, in order to use the models effectively, in order to understand what's going on, you have to be an expert. Which I feel has been the case for a lot of years now; going back twenty, thirty years in ML development, it's always sort of been the case. And until we can break through that, I feel like real adoption really isn't going to amount to good things. And we're really close to utilizing it in more and more concerning situations, where security is involved, or human safety is involved, or whatever Jillian is doing with automatic creation of, you know, protein folding. I honestly can't remember, but I'm curious about the human in the loop there. If we're doom spiraling, it's greed... oh man, why is it echoing? What happened? What did I do? Okay, give me a few minutes here while I figure out what happened to my camera.

Well, what Jillian's saying is she is worried that she's...

I'm the greed. I could very well be the greed in this situation. It could happen.

I think there are some really interesting ethical bits around this as well. So the topic of self-driving cars has come up a few times as we've been talking, and there are things that, as a human driving a car, you might instinctively make certain decisions about. If you've got a self-driving car and there is an unavoidable crash, and one scenario means five people, and another scenario means two people, but they are young people, then what does the car choose? At that point there's no human emotion, there's nothing in it. Someone has to program a model to say this life is worth more than this life, and how do you do that, right?

So this is the trolley problem, right? And it's sort of a moral dilemma more than an ethical one.
There was actually, I think, a quick study released by Stanford where they gauged a random sampling of people on which option they would pick: where should the car actually go? And I think at the end of the study there was a clear hierarchy of what humans actually prefer. I feel like it was something like: you should kill cats first, and then old men, and then old women, and then dogs, or something like that. And I think it had to do with the fact that cats will just get out of the way; that was the expectation, that they would either get out of the way or, in the worst-case scenario, this is who they would pick. And it was really interesting that they had done this, and there were of course some not-nice things said about the fact that they had gone through with the study. But, you know, I think humans will sort of adapt to preferential picks for what they are okay with.

The thing is, I often see people saying AI is going to have as much of an impact as cloud did, and it is having that much of an impact. But with cloud, there weren't really that many drawbacks that I can think of. There were concerns, there was uncertainty around who owns my data, who controls my data, is my data secure if it's in Amazon's data center versus a server in my own cupboard. But with generative AI, there's almost that kind of rabbit-in-the-headlights approach that a lot of people are taking at the moment, where I think there is a real danger, as we talked about, without the control, without either education or enforced guardrails, to be able to use this effectively. I remember seeing an article, I think it was the tail end of last year, being talked about in the US, where there were talks of developers of models being legally required to report on things like whether their models could be used for purposes that would have a national security implication, which I think is absolutely right from, again, a moral and security perspective. But there's always that fine balance with any emerging technology: how far do you regulate it? If you regulate it too much, does it stifle or prevent innovation? But then the flip side is, if something terrible happens because somebody has used a large language model to teach themselves how to carry out an attack, then the argument is that there wasn't enough regulation.

I think, going back to the self-driving car example, the solution there is just to go into the settings of the car, where you get a little order of preference: I'm okay with hitting this, I'm not okay with hitting this. Problem solved, right?

It's so tough. I mean, we joke, but this is what we unfortunately have to do as humans sometimes. That's how all of healthcare works, right? When there's a shortage, like during COVID there was the shortage of ventilators, you think the doctors didn't have to make decisions like that? It's very unpleasant, and nobody wants to think about them or talk about them, but the reality is we do anyway, and it has to happen. And I'd imagine that it also has to be a part of self-driving cars, because cars are terrible, terrible death machines. I hate driving. Have I mentioned on the show yet how much I hate driving and having to have a car?
I think it's because we're in this intermediary state. Once we get to the point where there's automation all around us and we have adapted to that fact, it's no longer as big of a problem. Because whose fault is it if you step in front of a train? It's like, oh well, the train was supposed to stop; it was supposed to know. And some of them do have safety protections in place, but realistically, you don't go on the tracks when the train is coming unless you have some reason you really want to be there. And I think realistically the same thing will happen in the automated car space, where the autonomous cars are driving around and, realistically, you don't step into the street. I mean, why would you go there? And with the AI cars, we will want to really redesign egress and flow for traffic, and we'll be able to do that effectively once everything has been automated.

I think there's a really interesting change coming. Similar to what we would have seen previously with processes being automated: anywhere there's going to be a step-change improvement in efficiency, is there going to be resistance from within businesses to work on these generative AI initiatives, because people think it's going to put themselves out of a job? It's a really challenging space, to try and tread the line between improving efficiency and making redundancies. Going back many, many years, you think about things like the Industrial Revolution: inevitably, change means that some jobs are no longer needed, but those people find different roles. And if this continues at the pace that it's going and disrupts industry at the pace it's predicted to, then people need to change with it. Because you're already seeing job adverts listing experience in AI as essential. It's no longer a bonus; it's a "you must be able to work with Copilot tools" and effectively know how to use AI to do DevOps or migrations or anything like that, because companies are expecting it. It's a really interesting change to see how the job landscape has shifted over the last twelve months with the introduction of AI. It felt like twelve months ago, using Copilot and tools like that was seen as cheating, but now it's just seen as part of the job. And I'm interested to hear how people are testing or qualifying your skills for that in the employment space.

I think people are going to use these tools, and I want people to use them; they're fine, and the job comes with those changes. But if something you're asked to do is missing those foundations, then I don't think AI can step in. Because if you're using the tools but don't understand what they output, the chances are you're building solutions that might work, but there's also a pretty good chance that they're not going to, or you've left a security hole in them, or it's going to cost ten times as much, because there's one best practice that you've not realized, because these models were trained on open code. So from my perspective: I've tried a few different copilot tools. GitHub Copilot came fairly early doors; I've given Amazon Q a go as well. At the moment I'm trying out the Windsurf editor and Cursor as a sort of more integrated IDE experience, and both of them are fairly good.
There's some really cool stuff there if you want to just hack about on a project. I do quite a lot with the Streamlit Python package, which is a great way to build data and AI apps with an acceptable user interface without having to know how to write good front-end code. Being able to just say "create a project using Streamlit; it needs to be able to interact with Amazon Bedrock; stub out the methods for me" and get something very quickly: it is good for that. I think the only way these tools will truly excel is with real understanding and context of what would be in a human brain. It's getting into the flow state of: I understand this whole repository, I see how the different pieces connect together. But also, if you've got microservice architectures and actually the context you need is in a different repository, then it needs that as well. So of course there are limitations; there's only so far these things will go. But from my stance, if somebody turned up at an interview and they wanted to use an AI tool as part of a tech exercise, for example, then it wouldn't necessarily be a problem; it would be hypocritical of me, right, to say that's a bad thing. But I'd certainly be more aware, and a bit more careful, about asking the right questions about the code that has been generated.

I think that's one of the really important parts here: most companies don't spend enough time evaluating their interview process for what the right questions are, and now they're starting to realize that AI is getting in the way of them, quote-unquote, effectively evaluating the candidates. And I think that really points to the fact that the question didn't make sense, their evaluation strategy didn't make sense, and there are tools that can easily solve it. And if those tools can solve it during an interview or a take-home interview test, then the candidate could likely be using that tool during their job. I think we're already starting to see some companies intentionally telling candidates to use AI, LLMs specifically, to solve the problem, because it is something that other engineers on their team are utilizing, and they would expect someone who comes onto the team to also understand how to utilize those tools, because things like configuration or linting, et cetera, will be changed fundamentally by AI. Looked at that way, if someone comes onto the team you're hiring into and doesn't have experience utilizing LLMs to help them sufficiently, or to cover their weaknesses, whatever they are, then they're not going to be as effective a team member when the rest of the team is expecting someone who can do that.

I mean, absolutely. And I think, with how much of industry and business generative AI genuinely has the potential and promise to reach, there are almost prerequisites that a lot of people aren't really thinking about at the moment, because they're going to get blindsided by "AI is great, AI is going to solve all the problems," when in reality there are certain things that are foundational to successful use of AI. Okay, AI is software: how do you monitor your software? How do you make sure the responses from your AI-powered applications are what you expect them to be? If you were deploying an API, you'd spend time thinking about what the useful metrics are.
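A deliberately minimal sketch of the kind of production wrapper being described here: timing each model call and recording the token usage that drives the bill. It assumes the Bedrock converse response shape; the model ID and alert threshold are illustrative:

```python
import time

import boto3

client = boto3.client("bedrock-runtime")

def monitored_converse(prompt: str) -> str:
    """Call the model and emit the metrics that actually affect end users and cost."""
    start = time.perf_counter()
    response = client.converse(
        modelId="anthropic.claude-3-haiku-20240307-v1:0",  # illustrative model ID
        messages=[{"role": "user", "content": [{"text": prompt}]}],
    )
    latency_ms = (time.perf_counter() - start) * 1000
    usage = response["usage"]  # input/output token counts drive on-demand billing

    # In production these would go to CloudWatch or Prometheus rather than stdout.
    print(f"latency_ms={latency_ms:.0f} "
          f"input_tokens={usage['inputTokens']} output_tokens={usage['outputTokens']}")
    if latency_ms > 5000:  # illustrative SLO threshold
        print("ALERT: response latency above SLO")
    return response["output"]["message"]["content"][0]["text"]
```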
Is CPU a useful metric, or is latency a useful metric? What are the things that actually have an impact on the end user of this? AI should be no different. You should have that kind of production wrapper; you should have monitoring; you should have security concerns that you're proactively protecting against. But then there's also that foundation of data. If you're an organization that has lots of data in lots of different places and you want to use it in AI, you need to have a good data platform. You have conversations with people around productionizing some of the AI proofs of concept or very early-stage experiments they've done, and you'll say: okay, you've managed to get these little samples of data from various data stores to prove this as a concept could work and is worth putting into production at a wider scale.

So one of the problems is that, with the pricing of a lot of the models today, if you're not running them yourself, you're pretty much paying for input and output tokens: the amount of context you're adding, and you need a lot of context to get an adequate answer. And then, for whatever reason, these companies are charging you for the garbage nonsense coming out of the models; the decoding process to get back a readable answer has a lot of nonsense in it, and you're paying for that. So I think the industry is being driven towards optimizing for these two things, which have nothing to do with the quality of the answer in the first place. And I know Will asked the question about bringing AI into the interview process, and I feel like, Will, you're now in a great position for me to ask you: do you feel like your interview process has been changing to respond to the increased usage, both in the workplace and with candidates potentially using AI during the interview process itself?

For me, no, because my interview process is probably a lot more old school than most people's. If I'm interviewing a candidate, we're going to have a straight-up bullshit session, because I work largely around infrastructure, and just through a casual conversation, I feel like I can get a much better feel for whether you know what you're talking about, or whether you've heard the terms but don't really understand what they mean. So a very small amount of my interview process has anything to do with hands-on-the-keyboard, technical coding.

Do you have some part of it that's any sort of technical validation or technical systems design, anything like that which could be impacted by AI at all?

Potentially, yeah, because we'll do an exercise of, you know, throw together a couple of microservices and explain to me the interaction between them. But then I'll spend a lot of time just talking through that: well, how does this part work? How does that part work? Tell me what happens if this does that? And I dig into a lot more of the operational stuff. And if a candidate could pre-game all of that with AI, good for them. I just think it's unlikely, given the dynamic nature of my interview process.

So, I don't want to spoil it, but there is a product out there where, in a remote interview, the candidate will run it, and it will listen to the audio that's coming across and watch the chat, and then dynamically generate text responses for the candidate to read out on the call. Now, I do think that there is a challenge here of being able to adequately understand what words are important in a sentence.
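Stepping back for a moment to the Streamlit-and-Bedrock scaffold Alex described a little earlier: a minimal sketch of the kind of thing a copilot tool can stub out in seconds (assumes the streamlit and boto3 packages; the model ID is illustrative; run it with streamlit run app.py):

```python
import boto3
import streamlit as st

st.title("Bedrock chat demo")  # a usable UI without writing any front-end code

client = boto3.client("bedrock-runtime")

prompt = st.chat_input("Ask me something")
if prompt:
    st.chat_message("user").write(prompt)
    response = client.converse(
        modelId="anthropic.claude-3-haiku-20240307-v1:0",  # illustrative model ID
        messages=[{"role": "user", "content": [{"text": prompt}]}],
    )
    st.chat_message("assistant").write(
        response["output"]["message"]["content"][0]["text"]
    )
```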
If you have a thought and you're sharing that thought, you know what the point of the thought is and which nouns are more important than others. But if you're reading a response from something else, you might as well say it all in monotone, because there isn't any part of it that makes sense upfront to you. You almost need to read it first and then answer back. But that exists. So unless you're bringing people into the office... and obviously we want to optimize for more remote working environments. You know, our company is one hundred percent remote. I know at least part of yours is, Will; I don't want to swear to that, but you do have different...

Yeah, we're one hundred percent remote as well.

Yeah. So there's only so much you can do there. You're not going to meet every single candidate in person at a coffee shop or something and go through some sort of validation that they're not doing that. You have to use your other skills to figure out whether or not they believe what they're saying.

Yeah, and I think in that scenario, I'd just rely on our sixty-day window. Every candidate we bring on has a sixty-day trial period, and if the expectations don't line up, we have sixty days to resolve that, and if not, cut ties. At least for us, we're really transparent about that. If you're cheating during the interview process, that hurts you, because we're just going to fire you a couple of months later. Is that what you want? I mean, you're risking it by coming and joining us, just like we're risking it on you, and both parties have the capability of ending that relationship. So, you know, if you managed to get through our interview process faking every step of the way, and then you also managed to fake the next couple of years successfully, I mean, I actually think that was a pretty good hire.

Yeah, agreed. Like, if you're faking it because this is really where you want to be, and I pick up on that, I'll do everything I can to help you get there, because that's how I got here. I lied through my teeth in job interviews.

Oh, that's how I got here too. I was just like, hey, I had a baby, that baby needed some food, and I was like, all right, I don't know what we're talking about, but I have Google. Let's go figure this out.

Yeah, I mean, I read all five hundred pages of the Microsoft SQL Server six book because that was the job I wanted.

This is where we can almost flip it on its head a little bit, because AI can, post-interview, be played to your advantage. If your interview process is cultural, and it is primarily assessing a person's fit to a business and their ability in how they learn and how they interact with other people, then if you can make a great hire who has the ability to learn very quickly, land on their feet, and be a great team player, AI is a great tool to assist in their upskilling on the things they don't know technically once they have got in the door. So there are two sides to it, I suppose.

Ah, that's a really cool idea. I hadn't thought about that before. So you just kind of flip it around and say: hey AI, here's this dude, where should I be helping them?
I think part of the trouble with that is you would need to have mostly verbal interaction with the candidate during the interview, and have to go through the process of: okay, we want to record this session so that later we can feed it back through and make it available. And I know there's no reason why this should be a problem, but every single additional obstacle you add is another risk of potentially losing out on the candidate. "Hey, can we record this session and have the recording to share with the rest of the team?" It's just another one of those things. So, if you're getting the value out, great. I would love to see the tool that actually helps there.

I'm thinking of it from an individual perspective. If you're hiring someone for a role and they are, say, ninety percent of the way there, but they are one hundred and twenty percent in terms of their ability to learn, you know that they will know the stuff given the opportunity to learn it, because they've demonstrated they've picked up on these technologies in every job they've changed; they've come in relatively fresh. That's where they, as an individual, are able to potentially use AI to augment their learning, whether it's through copilot-type tools, whether it's ChatGPT, those kinds of things, where previously you would reach for Stack Overflow. Which, again, looking back on it, you think: well, yeah, Stack Overflow's great because you would get loads of answers to things, but you would still have to understand the answer, because you don't know who the person is that's written it. Much like you have no guarantee of what the model is outputting in its response, you can only take an indication from the amount of crowdsourced endorsement, I suppose, of a particular answer. And that's, I guess, where responsibility maybe goes more onto model providers, to actually say: are the answers you are providing accurate? Maybe large language models are just too general for some things. Maybe this is where the more niche models that are specifically focused on Python coding, for example, are a better fit, because they have been trained on vetted, best-practice Python code.

Yeah, there are so many angles you can explore with this. I feel like we could talk for hours. One of the things I'm doing: we're doing our annual performance reviews, and it consists of each employee getting a peer review, doing a self review, and then I write their review. One of the things I've been doing — it's taking a huge amount of time, but I feel like it's still worth it — is giving AI the peer reviews and their self review and my review, along with the responsibilities for their current level and the next level, and then asking AI: how can I help this person better meet the expectations of their current role, and start growing towards their next level within the company? And it's been insightful, because it's picking up on things that I'd overlooked when reading through the reviews myself.

I think that's a great use case. But the key bit in that is it's grounded by human effort that somebody has put in to start with. It's grounded by truth and actual knowledge. If you took away the part where you write your review of that person and said, okay, they've written a self review, they've had a peer review from somebody else,
now let's use AI to write the employer's review — make it this tone of voice, here are the metrics about this person, how many lines of code they've written this year, those kinds of things — you could quite easily use AI for that. But that's where it turns into "this is a little bit too far."

Yeah, for sure, because at that point, personally, I would feel like, well, I'm not really adding any value here; their review came from AI. I feel like I've still got to put some skin in the game and do my job to help them.

Well, with that said, I think a lot of industries are going to be creating verification processes that are specific to the problem. So this whole idea that AI is going to be running amok: it's like, well, no, we don't really do that; that's not how things in the real world work. For example, in biotech, I think there's going to be a ton of AI-generated drugs, but they still have to go through the same verification process as all the other drugs, which takes years. Just being able to create the thing, just because the computer says that it's a valid drug, doesn't mean that it is. It still has to go through clinical trials, and it still has to go through peer review. And I feel like every industry is going to have something similar, right? So I don't worry about that one quite as much, except I worry a lot about greed in the loop. That's the one that I worry about. Like, oh look, now we can make all these biosimilars to, you know, this drug that the patent expired for, and we can just be pumping these out. And then, if there's not enough regulation, or if somebody can get pushed through any of these verification steps, then I could see that going very, very sideways. So I'm just going to hope that doesn't happen, and if it does, I'm going to move off to the woods and there are going to be no more computers in my life.

I think you hit on something that's quite ingenious here, actually, Jillian. If you just go through previous patents for drugs, and then you ask an LLM to generate a new drug that has the same bonding, you know, activation sites, the same interactions with other molecules, but is fundamentally different enough that it could be classified as a new drug that could be patented, then these companies will start losing a lot of money, because their patents won't mean a lot anymore, and we'll have a lot cheaper medication in the world.

That's the hope. That's not actually how things have been going. I mean, I really appreciate your optimism there, but I'm not... I don't know, I'm not sure. If you look at biologics, right: biologics are probably, I think, one of the biggest medical innovations in, like, decades, and they're so expensive. They cost, I don't know, I think Humira costs a couple grand a month without insurance or something like that. So, yeah, hopefully we do get this next wave where we're creating all the biosimilar drugs and so on and so forth. But when the biologics first came out, they had their patents and they had an absolute lock on the market. Legally speaking, you could not create a drug that was, you know, slightly similar.
They're called biosimilars; you couldn't do that, because of legal red tape and stuff. But yeah, I hope so. That would be great: if producing these drugs got cheap enough that the patents were no longer even worth it, that would be a pretty huge disruptor to the medical industry.

I mean, it could get much worse, because the company that produced the drug initially, if they had used LLMs anywhere in their process, technically can't patent it in the first place. So I think we're very close to the point where there will not be any patentable artifacts at all, unless the fundamental laws are changed.

I think we're already there, though, and patents are still around. So I think it's less about the computery stuff, you know. I work with a lot of companies and they're like, oh, can I patent this process, like the software process? And it's like, well, no, you shouldn't even really bother with that. Go patent the process that you use in the lab to actually create the drugs. And so that's where everybody's at. The actual data generation is like a throwaway thing, because a lot of it has to be open anyway — the data generation that you used to actually get to your drug — because it has to be peer reviewed and all that kind of thing. But everything that goes on in the lab can still, I don't know, can still have a patent. See, and this is why greed in the loop is such a problem, because there are always ways around these things, and people want to be making money, which I get. I feel like that's its own episode in itself: greed in software and tech companies.

Yeah. It could be a little bit depressing, though; it's not exactly a fun topic.

Okay, so Jillian's like, I have tons of optimistic topics that we should talk about. Let's pick one of those, especially regarding any sort of AI or ML.

Okay, I'm all for it. Yeah, let's just talk about the cool stuff and not talk about, you know, people potentially flooding the market with crazy patents and then nobody getting their drugs, because that's how that works.

So, speaking of cool topics: Alex, you work with a lot of companies implementing AI into their business processes. A lot of our listeners are in the DevOps field or deal with software engineering and infrastructure. What are the key pieces of advice you would have for them to continue their career and be ready for the next evolution?

That's a great question. I think it's about embracing it, but also being super critical of the solutions and the tools that are available. It's very, very easy to feel overwhelmed, I think, by the number of AI solutions that are available in any space. I think in DevOps, particularly if we're including the developer side of the tools in there, we're just going to see more and more come out. I mean, there's a company I came across whose niche is copilot tools, but they only offer models trained on your company's data, so they haven't got a public offering; they're aiming at enterprises. The idea is they train a model based on your organization's code bases, and that is your private copilot. So I think there's lots and lots to come in this area.
Operationally, I think the whole principle of DevOps is trying to break down that sort of metaphorical wall between the two sides and empower developers to do the operational tasks. I think we're seeing quite a lot come out around explaining operational events using generative AI, that kind of almost trace analysis of: well, this happened, we've got ten different data points here, so how do we correlate them? How do we say this happened because this happened, and this happened, and this happened, and the chain reaction was this? Even something as simple as putting those things together in some sort of structured data, and then using a large language model to summarize it into a Slack message that says: this has failed, and this is why. I think the one piece of advice I'd give people if they're looking to start experimenting with AI is: solve your own problems. Find the things in your processes, in your workflows, that take the most time, and use AI almost as your shadow, I suppose, would be a good way to describe it. So if you haven't got confidence in it straight away, tell it to do the same things that you would do, but build it so that it does them as a dry run, and make sure it was going to execute the same steps you would have.
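To make that concrete, here is a minimal sketch of the pattern Alex describes: correlated incident events gathered as structured data, a large language model asked to explain the likely chain of cause, and the result posted to Slack. The event schema, model choice, and webhook URL are all illustrative assumptions, not tools named in the episode.

```python
import json

import requests             # pip install requests
from openai import OpenAI   # pip install openai

# In practice these would come from your monitoring or tracing pipeline;
# the schema here is invented for the sketch.
events = [
    {"ts": "2024-05-01T10:02:11Z", "source": "deploy", "detail": "v2.14 rolled out to prod"},
    {"ts": "2024-05-01T10:03:40Z", "source": "db", "detail": "connection pool exhausted"},
    {"ts": "2024-05-01T10:04:02Z", "source": "api", "detail": "5xx error rate above 20%"},
]

client = OpenAI()  # reads OPENAI_API_KEY from the environment
response = client.chat.completions.create(
    model="gpt-4o-mini",  # any capable model would do here
    messages=[
        {"role": "system",
         "content": "You are an SRE assistant. Given correlated incident events, "
                    "say in two sentences what failed and the likely chain of cause."},
        {"role": "user", "content": json.dumps(events)},
    ],
)
summary = response.choices[0].message.content

# Post the explanation where the team will actually see it
# (placeholder Slack incoming-webhook URL).
requests.post("https://hooks.slack.com/services/T000/B000/XXXX",
              json={"text": summary})
```

The same scaffolding fits the "shadow" advice: have the model propose the steps it would take, log them as a dry run, and compare them against what you would have done before you ever let it act.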
Right on. Yeah, very cool. And that feels like a good segue into some picks. Some picks! So, I would say, do your picks have to be physical, or can they be software? Anything; anything goes. Okay, so I'm going to go for some that are related and some that are not. My first one: AWS have a free, public generative AI experimentation website called PartyRock. It was born from an AWS engineer who built it internally as a way to experiment with large language models, and then it got adopted by AWS as an organization. If you go to, I think the URL is partyrock.aws, there's no credit card or anything required. You get a free amount of usage each month, and you can build these generative AI apps. One caveat: it's free and it's public, so don't go and upload your company's financial records or personal data to it. If you're going to use that kind of data, just anonymize it first. Then what else? I'm going to go for something a lot of developers probably spend some money on, but I don't think you can spend enough money on: a keyboard and mouse. I kit myself out with a comfortable mouse and a comfortable keyboard. The one that comes with your Mac or your PC is functional, right, but after a while it's going to hurt your wrists. So what have I got? My little Logitech MX Master 3, which has far too many buttons on it to know what to do with, but it's comfortable. And then I have a Keychron K2 mechanical wireless keyboard, a really slim mechanical keyboard that's super nice to type on as well. And you said socks were cool; that was my prep for this, socks. I have some really cool socks, but they've all come from conferences, so I can't give people links. So maybe: which vendor gives out the best socks? There were some I got from InfluxDB last year or the year before, which were really cool. They were every color under the sun, super stripey, but really, really comfortable socks. Or the holy grail of conference swag: the Red Hat red hat, which you might be able to see up there on top of the bookcase. Yeah, swag is a bit of a debatable topic, but you can normally go to a conference with significantly fewer clothes than you need on day one, for sure.

All right, Jillian, what did you bring for picks? I actually have a tech pick this week. I was looking for some type of UI to build out my Terraform code, mostly because of this AI product I was talking about, where I deploy it on the client's site and it has to have, you know, that database and S3 bucket, a couple of Lambda functions, and then EC2 instances. And I was like, wouldn't it be nice if there was just a parameterized GUI and I could just go type and click a couple of buttons, because I'm really in my I-don't-want-to-be-typing era of my life. And I found Resourcely, and it is very, very cool. I would like to point out there's no way I can afford the plan that's actually very useful, so this is part pick and part me e-begging: you know, if the guys at Resourcely want, I could be the voice of your tool on the podcast, and I'm sure that would just be amazing. So there you go. But it is really neat, and I like that the back end is all just run by Terraform and Cookiecutter, because those are just my two favorite tools of all time. Half my life is run with Terraform, Cookiecutter, and Makefiles, and when you throw in the Makefiles, it's ninety percent of my life.
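For anyone curious what that workflow can look like, here's a minimal sketch of per-client, parameterized Terraform driven by Cookiecutter, in the spirit of what Jillian describes. The template layout and variable names are hypothetical, and this is not Resourcely's implementation.

```python
# Sketch: render one Terraform stack per client from a Cookiecutter template.
# Assumes a local template dir "terraform-stack/" containing cookiecutter.json
# and templated *.tf files; all names and variables are illustrative only.
from cookiecutter.main import cookiecutter  # pip install cookiecutter

clients = [
    {"client_name": "acme",   "aws_region": "us-east-1", "instance_type": "t3.medium"},
    {"client_name": "globex", "aws_region": "eu-west-1", "instance_type": "t3.large"},
]

for ctx in clients:
    # no_input=True takes every value from extra_context instead of prompting,
    # so the same template stamps out the S3 bucket, database, Lambdas, and
    # EC2 instances per client, ready for `terraform init && terraform apply`.
    cookiecutter(
        "terraform-stack/",
        no_input=True,
        extra_context=ctx,
        output_dir="deployments/",
    )
```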
So, we definitely have a full sponsor section on adventuresindevops.com, where if someone wants to sponsor this podcast, they can go there, read what we have, and decide if it's for them. You know, Jillian, what I found works is to ask your customer how incidentals work under their contract, and whether using third-party tools to cut down the amount of time you'd have to charge them for would be included. A lot of times when I was doing consulting, you would include those in the contract, and of course you would charge them to the customer, to optimize the value they're actually getting out of what you provide. That's always really tricky for me, just because the companies I work for aren't creating technology as their product. If they could get rid of me, they would, okay? If they could just say, you go away, we just want to work on our laptops with Excel, they absolutely would. So that one is always a little bit of a tough sell for me; instead I just start emailing people and trying to get stuff for free, which is probably questionable in terms of ethics. Well, I get emailed all the time by people asking for stuff for free, so I don't think you're doing anything especially wrong there. All right, well, thank you for the ethical vote anyway; I appreciate that. Sometimes I am a little bit like, hmm, maybe I'm a little too far on the side of the e-begging, but I do like money, so here we are. Anyway, it is a really great tool. It actually does generate this really nice UI for you, and it has this sort of parameterization with multi-tenancy built in that I really like, because I find a lot of tools just don't have that, and that tends to immediately not work for me, because I'm so rarely working on my own AWS account. My AWS account is as bare-bones as it can possibly be, just for dev on whatever it is I'm working on, and then everything else is deployed on client sites. So it did genuinely look like a really nice tool that has everything I want, and I think I can even make the free plan mostly work for me. But we'll see.

Right on. Warren, what did you bring for a pick? Yeah, so I've got something really interesting. It's actually an old research paper from Yale, from 2010, and the name of the paper is "Comparing genomes to computer operating systems in terms of the topology and evolution of their regulatory control networks." It compares the Linux operating system to the E. coli bacterium, and I find it really interesting from an architecture standpoint: how much of what we build in technology is so wrong. You look at the evolution of biology over millions of years, you look at the evolution of E. coli and what's currently there, and it's only a six-page paper. It's very short, and it really gives a lot of insight into the sorts of things we're building and whether or not we're building them effectively. Being in the infrastructure and systems design space, having new insights into how to build things, or into what actually is important, is something I always find really interesting. Is that from Wade Schultz? I don't think so, but I could be totally wrong, so I don't want to swear to it; I will have to confirm for you after the episode is over. Right on, because Wade's a really good friend of mine, and he's the head of computational health over at Yale, and that sounds exactly like something he would author.

So my pick for the week: I'm picking a book this week, a sci-fi novel called Juris Ex Machina, I think that's how it's pronounced, by John W. Maly. It's a really cool book that ties in a lot with the episode we've talked about here today. It's set on a future Earth where the legal system has been largely replaced by AI, and the main hero of the story is wrongly convicted and goes to prison. It's a really well-written book with a lot of super cool, nerdy tie-ins, and the writing is well done. It's fast-paced, so you get sucked in immediately. And on top of that, in about a month we're going to have John on the show to talk about the book and AI, so I'm looking forward to that episode. And that's my pick for the week. Awesome, I'm going to have to read it in preparation. Yeah, it's been a really cool book. I struggle to get into fiction books, but this one just slurped me right on in. Right on. Alex, thank you so much for joining us on the episode; it was a pleasure talking with you. Thank you for having me; it's been good fun. Right on. And to all our listeners, thank you for listening; we appreciate your support. Jillian, Warren, thank you for co-hosting with me, and we'll see everyone next week.