And here we go, another episode of Adventures in DevOps. Just in case you had us queued up in your podcast player and a new episode popped up: this is Adventures in DevOps. I'm the host, Will Button. Joining me as always, Warren Parad, CTO of Authress. What's going on, Warren? Hey, thanks for having me back. Well, like usual, I've really been struggling with the number of MDM solutions out there that are now just choosing this moment to have a problem. I mean, we had CrowdStrike before, and I think with Singapore, Mobile Guardian just felt the need to make themselves known to the whole world as well by locking out all their customers' systems. So, you know, I think it really says something about MDMs: maybe not the best solution for companies trying to figure out how to avoid particular security problems. I think it's Hollywood marketing theory. There's no publicity or... no... what's the phrase? I'm going to butcher it now. There's no such thing as bad PR. Yeah, that's the one I was looking for, thank you, because now everyone knows who those companies are, and so marketing is just having a blast with it. But today we are going to be talking about APIs, because... well, I'll save it. There's a lot to talk about there. But joining us in that conversation we have the co-founder of Lunar.dev, which has been in the press a lot lately. I've been hearing y'all's name pop up a lot, and so I'm excited to have you on here joining us: Eyal Solomon. How are you? Man, I'm good, thank you. Great to be here, guys. I'm excited because APIs... you know, working from the infrastructure side, I'm never in anything other than awe at the number of external APIs our engineering teams manage to find somewhere on the internet, create an account, get it in production, and then we never know about it until, like, we hit the free-tier limit and then they're like, "Well, yeah, we need a paid account." It's like, wow, is that not something that we could have chatted about before production was down?
Completely. I think that that's about, like, summarizing what the API economy is. I mean, tons of APIs, you get to just go connect, don't worry about it. I mean, we'll meet later on in production and we'll see how things are doing. But this is, like, the essence of the API economy out there. That's an interesting perspective, really. Like, I'm not sure how much I've seen that, but it really makes me think of that meme of Chappelle, where it's like, you know, "y'all got any more of them free tiers?" For sure. Well, we're going to dig into that a lot more, but first, give us a little bit about your background and what led you to co-founding Lunar.dev. Like, what decisions did you make in life that made you think that this is where you needed to be? Wow, that's a big question. This is me reflecting on all of my biggest decisions so far. That's a heavy one. But no. So, I'm the co-founder of Lunar.dev. I founded Lunar with my close friend of the past thirty years, which is Roey, the CTO of the company. We've been around since first grade, actually. Both of us are engineers, and cybersecurity engineers in particular. This is, like, our background — low-level understanding of things — which is not exactly what Lunar does. But taking that mindset of cybersecurity and, you know, understanding the toolset that we've got, we thought that there's, like, a lot of problems to solve when it comes to optimizing. At the beginning, you're thinking that — I mean, naturally, you're shifting towards optimizing infrastructure, like the infrastructure that you run upon, the cloud infrastructure. There are some cool startups in that direction. But I think the signal that we picked up along the way is that, listen, cloud infrastructure — that's cool, but me, as, like, a heavy consumer of third-party APIs, this is what bugs me at the end of the day.
This is what I'm trying to optimize on a monthly basis. This is, like, where my billing skyrockets. And hearing that, and talking with other companies, I think that along the way we've connected the dots and understood that, like, the natural progression is: companies moved from on-prem to the cloud, then they consume that resource in abundance, and that resource needs to be managed and optimized. This is stage one. But as companies progress to be much more consumers of APIs — like the SaaS perspective of things — that calls for another layer of management and optimization on top of it. So it kind of, like, led us, over months, to that understanding. It wasn't, like, one aha moment that, like, formed in our minds; it was talking with different companies and understanding that even though they're thinking it's a different problem for each one of them, it all boils down to the same problems, to the same type of middleware solutions. And then, I think, the inception of Lunar came to be: okay, we can be that platform that can build that management and active control on top of your egress traffic — the third-party APIs that you're consuming. So that's, like, the story in a nutshell. Obviously we've done API integrations ourselves, we've been optimizing them on our own previously, but back then we didn't connect the dots. Only when talking with a lot of companies did we understand that, okay, there's, like, a common denominator among all of those companies. Yeah, for sure. So, like, from my perspective, there's this array of APIs that your application consumes. Each one of those has a separate billing agreement, a separate tier; there's API keys that have to be managed for that; there's rate limiting. Each one, you know, may have different ways of showing that you're being rate limited, or how they return errors.
You know, we won't even go down the path of the APIs that return an HTTP 200 with a JSON blob that says "error" inside of it to let you know that there's an error. But, like, those are all the different things that you have to deal with when consuming APIs for your application, and so it can be a lot of work. Like, that's all work that is necessary for your application, but it's not really considered part of building an application. I think a lot of us don't think about that, you know. We think about, oh, we've got to write our functions, we've got to build our Docker images, we've got to build the UI, and where should we put the button on the screen, you know — and we tend to overlook that until we have a production outage because of one of these API issues. Someone's really got some, like, past drama here, right? We can just bill this episode to my insurance company as therapy. I mean, really — like, even the 200 with the error code in the body, I'll take that over an API which does something either monetary-related, physical-related, or timeliness-related and doesn't offer idempotency. Yeah, and there are those out there, and it's sort of ridiculous, you know. Can you go and talk to these companies and make them, you know, offer idempotency — you know, automatic retries that don't double-bill, or don't send messages or emails to a user twice? Because I'd really like that. So, you would imagine, right — I mean, if we could only talk with the entire API economy out there and make sure they're all aligned in the same way so that companies could consume them better. That's one option. But the other option is saying: listen, this is a consumption problem. So let's put the focus on the consumer, not the provider. Let's give active controls to the consumer and make a unified consumption layer, or mediation layer, on top of it. I think, Will, you said it correctly.
I mean, most of the focus so far, when it comes to third-party APIs, is around integrating with an API. We've got tons of solutions for that: either you're doing it natively, or you're doing it with an iPaaS, or you're doing it with a marketplace. You've got a lot of those solutions in place; they've been around for years. But there's not that much focus, if any, on the post-integration side. Post-integration is what happens when it's already running in production. Now, there's a difference between when you integrated with that API and when you're running in production, because in production, first of all, there's scale — obviously, like, immense scale that you couldn't test in staging. Things tend to break, and they tend to break over time, because scale gets bigger over time: outages, and the actual way that the API provider is behaving — you know, breaking changes, stuff like that. So that's something that you maybe didn't encounter during the development stage, but in production it's bound to happen. And I think there's also less visibility and real-time monitoring on the way that you're interacting with those APIs. So there's a lot of unknowns and not a lot of focus when it comes to managing those APIs in production. And this is where things begin to break. Companies begin to — as you well described it — I mean, along the way, when it breaks, you hear about it, and then obviously it takes time to understand: is it something on the provider side or on my backend? So, a lot of investigation over time, troubleshooting. It is a problem, and it's a problem that companies are, like, you know, bearing with along the way, mostly on their own — trying to build those things themselves as they progress and as they grow. So, definitely a problem.
Yeah, I mean, I totally get it, though, because even with the iPaaS solutions, or embedded iPaaS, with the integrations, it's sort of this area where you see it as a blocker to starting your project, or starting the integration, to get the business value out. But it's just like security: I feel like it's one of those things where you can ignore it up front and not think about it, and then later it's going to immediately bite you and take down your whole platform, because the quote-unquote project you started in your company to integrate with whatever that third-party provider is, is over. You know, maybe the team is gone. And so if you didn't start with the thought in mind of "how are we going to make sure that this integration is successful over time," you're likely not in a very good spot where you can just do a little extra work and get the value out. You likely need a real tool to make the difference. I completely agree with you, and I'll also refer to what you're saying in terms of resilience and availability. If you didn't think in advance, "okay, something is bound to happen with that third-party API," and you didn't put those retries in place, those circuit breakers, those fallback functions — so if one API goes down, you'll switch to the alternative — if you haven't thought about that (and, by the way, most companies don't think about it until they encounter those things over time), then definitely, things will be impacted. And the thing is, these days a third-party API's SLA has a direct impact on your product's SLA. I mean, as we said, if they go down, if they're malfunctioning, so does your product. So that's the mindset that we're trying to advocate to change. I mean, you need to think about your integrations not on day one, but, like, on day one hundred, when things will begin to scale and break — and what type of resilience have you put in place?
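The circuit-breaker-with-fallback pattern Eyal mentions can be sketched minimally like this. This is an editor's illustration, not Lunar's implementation: after a run of consecutive failures, calls skip the broken provider entirely and go straight to a fallback for a cooldown window, instead of hammering an API that is already down.

```python
import time

class CircuitBreaker:
    """Minimal circuit breaker: after `max_failures` consecutive errors,
    route calls straight to the fallback for `reset_after` seconds."""
    def __init__(self, max_failures=3, reset_after=30.0):
        self.max_failures = max_failures
        self.reset_after = reset_after
        self.failures = 0
        self.opened_at = None

    def call(self, primary, fallback):
        if self.opened_at is not None:
            if time.monotonic() - self.opened_at < self.reset_after:
                return fallback()               # circuit open: skip primary
            self.opened_at = None               # half-open: try primary again
            self.failures = 0
        try:
            result = primary()
            self.failures = 0                   # success resets the count
            return result
        except Exception:
            self.failures += 1
            if self.failures >= self.max_failures:
                self.opened_at = time.monotonic()
            return fallback()

breaker = CircuitBreaker(max_failures=2)

def flaky_provider():
    raise ConnectionError("provider outage")    # pretend the API is down

def backup_provider():
    return "served by fallback"

results = [breaker.call(flaky_provider, backup_provider) for _ in range(5)]
print(results)
```

After the second failure the breaker opens, so the remaining calls never touch `flaky_provider` at all — which is exactly the "day one hundred" resilience being described.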
That's the mindset: being active with the right integrations and with your maintenance over time. Yeah, and so one approach to this is you can do that yourself. You know, you can build in the monitoring, the alerting, and hope that you have the right error handling in place, and do that for each API that your application is consuming. But I'm assuming that if I use something like Lunar.dev, I get a lot of that stuff for free just by using your platform, right? Right. I mean, it's worth maybe saying a sentence about what we do and who we are. We're an API consumption management platform. We're giving that tool, or that platform, to engineering teams to monitor, manage, and actively optimize their outgoing traffic. The way the product works is basically two components. First of all, you've got the main thing, which is the egress gateway. This is where all of the API calls will run through — or, selectively, the API calls that you want to manage. And you've got the component that will tunnel that traffic through the gateway: instead of going to the actual provider, it will be tunneled, or rerouted, via the gateway. There are various methods to do so. You can do it with an SDK that intercepts the traffic before encryption. You can do it by changing the URL so it points to the gateway, with a header on top of it. So there are, like, multiple ways of routing the traffic. But the main thing is that once the traffic is being routed through the egress gateway, this is where you're seeing, first of all, as you mentioned, visibility — for the first time, in real time. But what's important with visibility is that it's not just "okay, those are my API calls." It's the actual performance understanding of how those APIs are behaving: like, what's their error rate, what's the latency, what's the duration it takes for an API call to go out and make it back to your system — all of those things and much more.
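The URL-rewrite style of routing Eyal describes can be sketched roughly as below. Everything here is illustrative and assumed, not Lunar's actual integration contract: the `EGRESS_GATEWAY_URL` environment variable and the `x-original-host` header are made-up names showing the general idea — point outbound calls at the gateway, and carry the real destination in a header so the gateway knows where to forward.

```python
import os
from urllib.parse import urlsplit, urlunsplit

def route_request(url, headers=None):
    """If a gateway is configured, rewrite the outbound URL to hit the
    gateway and stash the real destination host in a header; otherwise
    send the request directly to the provider."""
    headers = dict(headers or {})
    gateway = os.environ.get("EGRESS_GATEWAY_URL")
    if not gateway:
        return url, headers                     # direct to the provider
    parts = urlsplit(url)
    gw = urlsplit(gateway)
    headers["x-original-host"] = parts.netloc   # tell the gateway where to go
    rewritten = urlunsplit((gw.scheme, gw.netloc, parts.path,
                            parts.query, parts.fragment))
    return rewritten, headers

os.environ["EGRESS_GATEWAY_URL"] = "http://lunar-proxy:8000"
url, hdrs = route_request("https://api.stripe.com/v1/charges?limit=5")
print(url)    # http://lunar-proxy:8000/v1/charges?limit=5
print(hdrs)   # {'x-original-host': 'api.stripe.com'}
```

The appeal of this style is that it's a one-line environment change: unset the variable and every call goes straight to the provider again, with no application code touched.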
Those are the types of visibility that are lacking, and you need that visibility to have all of those controls that we talked about — I mean, resilience, and keeping track, in terms of security, of how much you're enforcing security over your API interactions, stuff like that. So the egress gateway is there to provide you with visibility. But then, on top of it, because the egress gateway has actual control over the request or response, you can have active controls to modify the traffic, to change it, to shape it, to orchestrate that traffic. And now you can enforce egress policies, such as caching API calls to third-party APIs — for example, if you want to reduce some costs and avoid those redundant API calls, or maybe improve latency, same aspect — or you want to define your own type of throttling mechanism, like the way you're pacing your outgoing calls, or queuing API calls by priority. All of those things, which we can name API middleware — those are the things that, once you're tunneling traffic via the gateway, you can have active controls over. So you can have better resilience, as we talked about; you can reduce some costs; and you can have better security and efficiency in the way that you're interacting with those APIs — without changing your existing infrastructure and your existing code. There's something about this that I really like, because I feel like we've integrated with lots of platforms, as a company and over my career, and something I keep getting back from support requests is, "Hey, can you send us your logs regarding that?" And I'm like, our logs for calling your service? That thing that you're supposed to have? "Oh yeah, we prune it after, like, two, three, or seven days." I'm like, I don't know what to tell you, but, like, I'm not logging what's going on on your side. Like, I don't have that ridiculous data.
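The call-caching policy mentioned above — answering repeated identical requests from a cache to cut billed calls and latency — reduces to a small TTL cache at the egress point. A minimal sketch, with `fetch` standing in for the real HTTP client and the endpoint URL purely hypothetical:

```python
import time

class EgressCache:
    """Tiny TTL cache for outbound calls: identical requests within
    `ttl` seconds are answered from the cache instead of hitting (and
    being billed by) the provider again."""
    def __init__(self, fetch, ttl=60.0):
        self._fetch = fetch
        self._ttl = ttl
        self._store = {}                        # url -> (expires_at, body)

    def get(self, url):
        now = time.monotonic()
        hit = self._store.get(url)
        if hit and now < hit[0]:
            return hit[1]                       # cache hit: no API call made
        body = self._fetch(url)                 # cache miss: one real call
        self._store[url] = (now + self._ttl, body)
        return body

calls = 0
def fake_provider(url):
    """Stand-in for a billed third-party endpoint."""
    global calls
    calls += 1
    return {"url": url, "rates": {"USD": 1.0}}

cache = EgressCache(fake_provider, ttl=60.0)
for _ in range(50):
    cache.get("https://api.example.com/rates")  # hypothetical endpoint
print(calls)   # 1 -- forty-nine redundant billed calls avoided
```

Doing this once at the gateway, rather than per-client, is the point: every service behind the gateway shares the same cache without any of them knowing it exists.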
As a matter of fact, you know, that's exactly the sort of thing that we're dropping intentionally unless there's a problem, because I don't want to record unnecessary duplicate data like that. So, you know, there's a really interesting aspect here where you don't want it to be a first-class application notion to have to care about storing these requests or responses from a third party. Either it's application data, in which case you're storing it in a different way in your database, or it's third-party data, in which case — don't you have that already, that you can, you know, use to validate whatever I'm talking about? So there is something huge here for support requests. And I'm wondering whether or not there's, like, a common offender that you find — like, all your customers are like, "Oh yeah, we use Lunar.dev, always with this platform or that platform" — like, there's a lot of mileage out of it. Can you talk about that? So, I think that in terms of platforms, the majority of our customers are interacting directly with the API provider, because, once again — I mean, at least that's the pattern that we saw up until now — when it's a business-critical API, you want to have that direct connection. You don't want to be reliant on a third party that will tell you something has been broken with your API. So there's a direct linkage with those API providers. I will say that the vast majority of the companies — I mean, I'll shift the question a bit — the majority of them always want to start with visibility. I mean, first of all, let me understand what is happening out there. I mean, if there's sprawl, if there's a lack of visibility: give me discovery, give me a catalog, and give me an understanding, in real time, of what is happening. Past that phase, you'll be open to the understanding, or the idea, that those integrations might have problems — and how can you remediate them?
But I think that if there's a common ground among all the companies that we've seen, it's: first of all, let me, for the first time, understand how my API calls are behaving, how that third party is interacting, so I can decide later on whether I actually want to shift and change and orchestrate the traffic so it fits my business logic. I hope that answers the question, because I shifted it a bit. Yeah. I think visibility just can't be overstated enough, because that's huge — that's the basis of your costs for APIs. It's consumption-based pricing, and it's just critical to understand how often you're using these APIs. And it's really hard to do, because if you have to implement it yourself, client-side, for every single API that you consume, that's so much duplicated work. And if you try to get it from the API provider — a lot of my experience with that has been that it's challenging to get. You know, you're getting, like, 429 throttled, and so you log into their platform to see what's going on, and they're like, "Yeah, you're getting throttled." I'm like, well, how much? How many requests am I making? How long has this been going on? And that type of visibility is not there. And you've actually, I think, stated not the worst use case, because those are API providers — they will give you some kind of answer and understanding, a dashboard of what is happening. You'd be surprised — or not surprised — how many APIs are not "third-party APIs" in the sense that the core offering of that company is its API. It's just something on the side, something that it has to do. Those are — I mean, I talked with someone just recently, and he said, I'm going to classify the world of APIs into two sections: there's the APIs that you want to use, and the APIs you're forced to, or must, use. And those forced-to or must-use ones — those are probably poorly documented legacy APIs.
I'm going to say, like, a bad word: SOAP-based APIs, in some cases. And those are APIs where it's hard to manage, hard to understand, like, what is happening in real time. And, as you mentioned, it's an ongoing problem. And even if you did have a clear view from every API provider — like, how much am I consuming in real time, when am I going to hit those 429s — think about the sheer number of APIs where you'd need to check each and every dashboard to get that understanding. I mean, it kind of goes to show that you need to have active controls. You need to have that visibility from your side, keeping track of it — I'm saying, like, based on the tier that you've purchased, what is your usage over time, when are you going to be bound by those 429s — and, as we said, take active controls on top of that data. So if you know that you're bound to hit those 429s, like, a week from now, or maybe, like, a few minutes from now, then you want to change the pace or the order of outgoing API calls so you can prioritize your VIP customers first — make those API calls first, instead of the freemium customers, for example. So visibility can go up to a certain degree from the provider side, and the vast majority of them are not giving you that full visibility. But then the controls, and the actions that need to be taken based on that visibility — this is on you. This is the company, the consumer of the API. And yeah, it is a thing, and it varies across industries. You would imagine that maybe it's, like, just a subset of APIs — no: the freight industry, the travel-tech industry, even, like, the companies that consume APIs on behalf of their customers. Think of all of the security-posture-management types of companies that do some kind of scanning and posture management. Those are, like, once again, another problem that is being opened up.
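The "VIP first, freemium waits" idea above is, mechanically, a priority queue drained against the remaining provider quota. Here's a minimal illustrative sketch of that (the class and its names are invented for this example; a real gateway would also refill the quota per rate-limit window):

```python
import heapq

class PriorityDispatcher:
    """Queue outgoing calls with a priority so that, when provider quota
    is tight, high-priority (VIP) traffic is sent first and low-priority
    (freemium) traffic waits. Lower number = sent sooner."""
    def __init__(self, quota_per_window):
        self.quota = quota_per_window
        self._queue = []
        self._seq = 0                           # FIFO tie-break within a tier

    def submit(self, priority, request):
        heapq.heappush(self._queue, (priority, self._seq, request))
        self._seq += 1

    def drain(self):
        """Send as many queued calls as the quota allows this window;
        anything left stays queued for the next window."""
        sent = []
        while self._queue and self.quota > 0:
            _, _, request = heapq.heappop(self._queue)
            sent.append(request)                # stand-in for the real call
            self.quota -= 1
        return sent

d = PriorityDispatcher(quota_per_window=2)
d.submit(priority=1, request="vip-report")
d.submit(priority=9, request="freemium-sync")
d.submit(priority=1, request="vip-invoice")
sent = d.drain()
print(sent)   # both VIP calls go out; the freemium call waits
```

With only two calls left in the window, both VIP requests go out and the freemium one stays queued — the ordering decision the provider's own rate limiter can never make for you.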
Everyone has their own type of sickness and problems when it comes to consuming APIs, and everyone has been building their own types of mechanisms to deal with them and to gain that type of visibility so that they can act accordingly. For sure. I think the scale of this problem just gets more and more. I heard a comment recently — and it's not one hundred percent true, but I think it resonates with some truth — that we're not software engineers anymore; we just bolt different APIs together to get a different result. Someone told me that we're the sum of all of the APIs we're consuming. It's, like, getting philosophical over here. That's great, I like that. Yeah. So, in terms of working with Lunar, what does it look like to use Lunar to solve this problem? Like, when you integrate with Lunar, do all of your API calls go to Lunar and then you relay them through there? Or how does that work? Yeah. So, first of all, Lunar is a self-hosted solution. We perceive ourselves as a mediation layer, so we're an infrastructure component. Companies are installing Lunar on their own and managing it. You have the control plane — the control plane is a SaaS one — but the infrastructure itself, the egress gateway and the routing of the traffic, those are self-hosted. Phase one of the product is saying, okay, once you install the egress gateway, you choose which APIs you want to tunnel. You can either choose to tunnel everything, or just traffic from specific applications, or a subset of it. Once it's being tunneled, the first thing you're getting is that discovery and that cataloging. That catalog is going to be filled up in real time based on the actual API calls being made. The system can detect what types of API calls are being sent outside based on the domain — the way that domain looks. And then, once those APIs have been discovered, you're getting, first of all, that visibility.
You can either see it from the control plane that Lunar offers, or from your Datadog or whatever APM you're using. And on top of it, what we're doing — this is, like, the next phase — is, on top of giving you that visibility in real time, we're saying: listen, we're detecting problems over here. We're seeing that you've surpassed a specific threshold of 429s, for example. And then the active part of the system kicks in, because now we can offer you policies — we call them flows, remediation flows — that give you active controls based on that problem. If you're seeing 429s and you want to pace the outgoing traffic in a certain way, you can define a rate-limit policy. It can be concurrency-based; it can be strategy-based, where you're holding to a specific, you know, pacing in mind. It can also cache some of those API calls. So every middleware you can think of, we're implementing as we go into the system, so that if you're encountering a problem, the system will detect it over time and can suggest a solution. A solution is just enforcing that policy on top of the gateway, instead of actually writing it, as you do these days, in code within your application. That's the way the system works, and people are onboarding additional APIs once they've gained that trust with the system, and then seeing additional problems and enforcing policies on them. Eventually, what we see ourselves as is that missing infrastructure layer — a much-needed one. I think cloud-native companies, they're making API calls; they need something to manage and orchestrate that traffic. And this is basically Lunar. I like the fact that it's self-hosted. Yeah, I think that it's something that grew on us.
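A concurrency-based rate-limit policy, as just mentioned, boils down to capping in-flight calls to a provider. Here's an editor's minimal sketch of the idea (not Lunar's code) using a semaphore: at most `limit` calls are in flight at once, and extra callers block until a slot frees.

```python
import threading
import time

class ConcurrencyLimit:
    """Concurrency-based egress policy: at most `limit` in-flight calls
    to a provider; extra callers block until a slot frees up."""
    def __init__(self, limit):
        self._slots = threading.Semaphore(limit)
        self._lock = threading.Lock()
        self._active = 0
        self.peak = 0                           # highest concurrency observed

    def call(self, fn):
        with self._slots:                       # blocks when `limit` in flight
            with self._lock:
                self._active += 1
                self.peak = max(self.peak, self._active)
            try:
                return fn()
            finally:
                with self._lock:
                    self._active -= 1

limiter = ConcurrencyLimit(limit=2)

def slow_provider_call():
    time.sleep(0.05)                            # pretend network latency
    return "ok"

threads = [threading.Thread(target=limiter.call, args=(slow_provider_call,))
           for _ in range(6)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(limiter.peak)   # never exceeds 2, regardless of caller count
```

Enforcing this at the gateway rather than in each service is what makes the cap hold across many containers at once — the situation where a per-process limiter silently stops working.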
I mean, maybe it would have been an easier decision to start as SaaS, but we understood that we don't want to deal with the cost of the outgoing API calls if it were a SaaS solution; we don't want to have some kind of risk in terms of PII being leaked outside — like, the actual API calls of companies; and we want to have as low latency as possible. Obviously this calls for a self-hosted solution, and this is, like, where the system and the product shifted. Right on. We had a similar discussion, actually, internally, for our products. But since we're not a proxy — to us, it's sort of a second request that just goes somewhere else — in that regard, it's not a concern for us being a SaaS. But given that the model is having the request go through the gateway, you would realistically need the gateways as close to your customers as possible — between your customers and their users, everywhere those users are — which is an even more ridiculous problem than what we realistically have. And I can definitely see that. I have to ask: it's not like you're hijacking TLS or something to proxy the traffic — you have, like, some sort of HTTP SDK that you use instead of whatever the default REST or HTTP client or SOAP client is that the actual application is using? I guess that's right. I mean, if it's an SDK, we're intercepting the traffic just before it's being encrypted, and then the actual TLS will take place from the gateway to the third-party API. Good callout, Warren — it's man-in-the-middle as a service. Well, I mean, you know, it's funny, what you're talking about, because almost ten years ago, I feel like we were in this spot where all the problems you're talking about were exactly the things that we were dealing with. And it wasn't about production. It was like: we're calling out from our service to another service internally, in our company, managed by a different team, and they can't handle the load that we need to support our customers.
And so we added in a middleware — in C#, as it was — to basically buffer the requests going out, to make sure we didn't hose them, by sort of rate limiting ourselves and trying to do some traffic shaping there. And, like, that was one of the things we did because they couldn't be trusted to handle real requests, which is just absolutely so ridiculous. And we open-sourced that, and, like, other teams started using it in the company — which is, like, the opposite of what you think when you want to deal with requests coming in and you're like, "Oh, well, you know, we want to rate limit them." We were, like, self-rate-limiting. And that wasn't the only time — like, there were a couple other situations. The biggest one is — you know, I don't know how much I want to bring this up, but, like, one of our competitors in the auth space, they charge for individual client JWT token generation. And lots of times the application you have on your side — the service or microservice or whatever — doesn't have a good idea of how to cache, right? There's a lot of concurrency and separate containers being spun up, especially with function-as-a-service. And so, internally, there was a team at our company who wrote a proxy to — like, our now-competitor — to cache these requests and just return a cached token if it was still valid. I'm just like: if you know you're running into these situations, you really have a couple of options here. Number one is, you know, please pick a different provider — like, you know, there's multiple third-party providers; some of them don't charge you for that ridiculous notion, while others do. And obviously the second option is, you know, go build something yourself to prevent making these calls. And the third one is, I guess, you know, now: just use Lunar.dev. That's an obvious answer. Thank you, Warren, first of all. But I'm with you on that.
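The token-caching proxy Warren describes is, at its core, a cache keyed on expiry: reuse a minted token until shortly before it expires, instead of paying the provider for a fresh one on every request. A minimal illustrative sketch (names invented for this example; a production version would also need to be safe under concurrency, as Warren notes):

```python
import time

class TokenCache:
    """Cache a provider-issued token and reuse it until shortly before
    expiry. `fetch` is whatever call mints a new token (billed per mint
    by the provider); `skew` refreshes a bit early to avoid using a
    token that expires mid-request."""
    def __init__(self, fetch, skew=30.0):
        self._fetch = fetch
        self._skew = skew
        self._token = None
        self._expires_at = 0.0

    def get(self):
        now = time.monotonic()
        if self._token is None or now >= self._expires_at - self._skew:
            self._token, ttl = self._fetch()    # one billed mint
            self._expires_at = now + ttl
        return self._token

mint_count = 0
def mint_token():
    """Stand-in for the billed token-generation endpoint."""
    global mint_count
    mint_count += 1
    return f"jwt-{mint_count}", 3600.0          # (token, lifetime in seconds)

cache = TokenCache(mint_token)
tokens = [cache.get() for _ in range(100)]
print(mint_count)   # 1 -- one billed mint serves all 100 calls
```

The catch Warren raises is exactly why this ends up in a shared proxy: with many short-lived containers, each process keeping its own `TokenCache` still mints one token per container, so the cache has to live at a single shared point to actually cut the bill.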
I mean, first of all, what you're saying is really interesting. I think the bigger companies that we've spoken with have the problem that Lunar is positioning itself for — active controls over third-party API consumption — as an internal problem, because they're serving so many, like, internal API calls that their services have their own domains. They're like a third-party API to the organization. So that's, like, a thing. And I think that that pattern, as we were just talking about — this is, like, a common pattern. I mean, it's not rocket science, in a way, but the pattern is that companies have been building it on their own. Now, we can discuss whether it makes sense to build it on their own, but I think the main argument I'm going to make is: why would you build it on your own? Yeah, you can build that type of gateway on your own. Take a proxy — choose whatever you want, choose NGINX, choose HAProxy — you can build it. The question is: why, though? I mean, why build something to maintain third-party API integrations when your core purpose is innovating and, like, bringing new value to the actual product that you're building? That maintenance over time — it's something that's, like, seamless to the eye, but over time you're seeing that you're dedicating more and more task force to manage that infrastructure, to make sure that it's scalable, that it's not a single point of failure. I mean, it becomes another animal on its own within your infrastructure. So we're saying, in that build-versus-buy dilemma, you should really think about the technical considerations: how much time will it take you to build it and maintain it, and whether you need to do it as part of your core purpose. Our claim is that you don't have to, and so you can, you know, use it, you can work with us as you go, stuff like that.
So yeah, I think that we're trying to shift the paradigm: you don't need to build it on your own. Yeah, I mean, one of the things that really comes to mind is that with larger companies — products with well-oiled APIs, and there's very few of them out there where I'm like, "Oh, that's a good API"; I think one that comes to mind often is Stripe — you probably maybe don't need a solution like this. But there's so many companies out there that are making APIs where it's not their core competency, and they're not good. And — I can't believe I'm saying this — some companies don't offer multiple API keys to interact with their APIs. And so, if you have a company, a product, that interacts with one of the chat services — they have an API, and you only get one API key per chat bot — what would you do if you had multiple teams in your company that both needed to access that chatbot, you know, to utilize its services? Because you have, maybe, two-pizza teams and microservices. And so it's not even a matter of building the competency yourself. It's really like: are you going to build this chat bot application proxy just to communicate with that thing? And, like, what would go into that, even? And so it's not like you just build something yourself — you really need a core thing here in order for it to work. And, well, there's an open-source solution to just go and cross that chasm right away without having to — I mean, you're not going to build this yourself. There's no way you can split an API key into two services — like, that's not even an option, right? You're going to build a proxy. Sorry — I like that. I mean, this is a truly interesting thing that we've actually seen, because we've heard it from customers who developed it, and I'll explain. This is a problem where you have multiple consumers consuming the same API key. Now all those consumers are sharing the same quota.
Yeah, so until you've hit that problem, if you want to rate limit based on a single key, you can do it rather easily with code in your application. Once you need to distribute the quota among different consumers, you've got to have something in place, that single point, which is a proxy or a gateway. Now, what we've done in that respect is say: listen, if we're in possession of that API key, then we can generate sub-keys, or virtual keys. Those sub-keys can be given finer-grained control, meaning you can have a subset of the quota assigned to every key. So first of all, you get granular control over each sub-key; now production and staging will consume eighty and twenty percent, based on your logic, for example. But the thing you get on top of it is also a security enhancement, because developers will no longer have the threat of the actual key sitting in plain sight. You will always have that generated sub-key, which is translated within the gateway to the actual key to make the API calls. All of those mechanisms have to take place with a proxy in the middle. And it's not a niche issue. As you said, a chatbot is one option, but we've heard it all across the board, so this is one of the interesting patterns we've seen along the way with API consumption problems. It varies across industries, by the way, and across company sizes. It's one of those interesting takes. And then, adding on top of it, think that maybe you want to regulate the traffic on a queue basis, so production will be prioritized over staging; you want to pace that traffic. You can do all of those types of middleware manipulations once you've got that proxy, or that gateway, in place.
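The sub-key mechanism described here can be sketched in a few lines. This is a toy illustration, not Lunar's actual implementation; the class, the quota model, and all names are invented for the example. It shows the two wins mentioned above: per-consumer quota shares, and the real provider key never leaving the gateway.

```python
import secrets


class SubKeyGateway:
    """Toy sketch: the gateway holds the real provider key, hands out
    generated sub-keys, and enforces each sub-key's share of the quota."""

    def __init__(self, real_key: str, total_quota: int):
        self.real_key = real_key
        self.total_quota = total_quota
        self.sub_keys = {}  # sub_key -> {"limit": int, "used": int}

    def issue_sub_key(self, share: float) -> str:
        # e.g. share=0.8 for production, share=0.2 for staging
        sub = "sub_" + secrets.token_hex(8)
        self.sub_keys[sub] = {"limit": int(self.total_quota * share), "used": 0}
        return sub

    def forward(self, sub_key: str) -> str:
        entry = self.sub_keys.get(sub_key)
        if entry is None:
            raise PermissionError("unknown sub-key")
        if entry["used"] >= entry["limit"]:
            raise RuntimeError("429: sub-key quota exhausted")
        entry["used"] += 1
        # Only here, inside the gateway, is the real key ever touched.
        return self.real_key
```

In practice the gateway would sit in front of HTTP traffic (NGINX, HAProxy, or a dedicated proxy) and the quota would reset per billing window, but the bookkeeping is the same idea: staging exhausting its twenty percent cannot touch production's eighty.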
I'm sparking on that a little bit, because you said there's no other way around this, and I'll tell you one, because I've seen it in practice: all the services log their own usage and then export it to a third-party system, where they try to aggregate the usage back up to understand what's actually happening, after the fact. And it just seems so nonsensical to try to log this data within the service, because it was bad enough before, when you were sharing something, but now you're adding a lot of complexity to your logging and observability system just so you can pull out this non-application metric that really has nothing to do with the application at all. And I will give you another method, an actual method we've heard of. This is a company, an IPO'd company, a big one, same problem: one key, two consumers. They're sharing that data via email. They've established a protocol over email saying, listen, you will not surpass that threshold, and if we have some kind of spillover, I'll relay it to you with another email message saying, listen, I've got some of that quota left, so you can use it. This is the way it works these days. I mean, not "these days"; it's the way it works for companies that don't have that infrastructure mindset. It should be part of the architecture. That's what we're trying to advocate out there. So yeah, it can go rogue, and worse, pretty fast. I like the idea of issuing sub-keys. That's pretty cool, because it's, I don't know if "common" is the right word, but it's not unusual to see cases where you need to connect to the production-grade API service to get the quality of data that you need. And then you have a scenario where a dev application runs a load test, blows out your quota, and now prod's dead. If I had a nickel, a few nickels, right, I don't want to over-exaggerate, but yeah, completely agree with you.
Yeah, I mean, it's sort of ridiculous that these things exist, though. Like, I would really like to not have to think about that: prod and non-prod have different keys for different universes, and there's no way you could have a problem there, and you would never need to share keys between different consumers, because each consumer has their own key with its own individual permissions. That seems so table-stakes to me. But I think I'm heavily influenced here, because, you know, we offer API keys as a service within our own product, and I know not everyone's using it. I wish I could say, oh yeah, everyone in the world is using our product, so they have API keys as a service out of the box. But they're not. And these APIs, as you said, every industry has some technology that's somewhere along the technology adoption curve for real API maturity: lax documentation, or not even using JSON or a compressed binary format to communicate, but using XML or SOAP or something else more ridiculous. It's just really unfortunate. I want to go back under the rock that I, you know, maybe live under, where it's a little bit nicer, where people aren't doing this. Yeah. So, Warren, you brought up the chatbot idea, and it made me think of the show-prep notes that we had prior to the show, and in there you've got AI-driven APIs. Can you elaborate on that phrase? Because I'm super curious to learn more about it. Yeah. So, first of all, I was looking at the counter of the recording that we started, and it's minute forty-one and this is the first time we've brought up AI, which is cool.
Yeah, I think maybe the way that I want to approach that topic is not necessarily AI APIs per se, but the coupling that is now taking place, and will keep taking place, as companies try to embed some kind of AI offering into their products. Obviously, the vast majority of them are and will be integrating with AI APIs; you've got a bunch of them these days competing over cost and accuracy. And the thing is, we can look at AI APIs like OpenAI and Gemini and whatnot as a subset of the problem we've been talking about for the past forty minutes, which is API consumption problems. But if you were to dive a bit deeper, you'd understand that even though they are just another set of APIs, there's something pretty unique about them. First, it's the tokens; that's the currency you're using. Every API call made with a prompt can be boiled down to a number of tokens, which correlates with cost. And this is a problem on its own: how do you keep track of the tokens being consumed? How do you have active controls and regulation on API calls based on the number of tokens consumed? So that's one aspect of it. The second aspect is security, which I think we're just scratching the surface of. How do you make sure you're not abusing those AI capabilities to extract something you weren't meant to extract, if there's a malicious actor? How do you make sure you're not sending sensitive data over those API calls that you weren't supposed to send? Those are things that start to be pretty unique to AI APIs that companies need to think about. And the last thing, maybe the third thing, is cost, because those are costly APIs.
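To make the token-tracking point concrete, here is a minimal sketch of a per-consumer token budget. The four-characters-per-token heuristic and the price constant are illustrative assumptions, not any real provider's numbers; a production version would use the provider's own tokenizer and published pricing.

```python
PRICE_PER_1K_TOKENS = 0.01  # hypothetical price, for illustration only


def estimate_tokens(text: str) -> int:
    # Crude heuristic: roughly 4 characters per token.
    return max(1, len(text) // 4)


class TokenBudget:
    """Track token spend for one consumer and block calls past a cap."""

    def __init__(self, max_tokens: int):
        self.max_tokens = max_tokens
        self.used = 0

    def charge(self, prompt: str, completion: str) -> float:
        tokens = estimate_tokens(prompt) + estimate_tokens(completion)
        if self.used + tokens > self.max_tokens:
            raise RuntimeError("token budget exceeded; blocking call")
        self.used += tokens
        return tokens / 1000 * PRICE_PER_1K_TOKENS  # estimated dollar cost
```

The same bookkeeping could live inside the gateway discussed earlier, so the budget is enforced before the prompt ever reaches the provider rather than discovered on the invoice.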
I mean, you have to have active controls and predictions on how much it will cost you to consume those APIs. And then you need to do the second phase, which is: okay, if that's a costly API, do I actually need to make that call to ChatGPT, to GPT-4, or can I route it to a less expensive API, maybe an open-source type of model, maybe Hugging Face, stuff like that? So there are a lot of considerations that are pretty unique to AI, and you need to regulate that traffic just as you need to regulate it with the other APIs out there. And I think this is the next phase of AI and APIs, and the problems associated with them will only grow over time. I think you're actually underselling it a little bit here. If I think about it just a little bit longer: you actually have the capability of altering a hypothetical system prompt to restrict the responses coming back from AI-enabled APIs. Like, hey, you know, prefix whatever prompt is being sent with "your response should be less than one hundred characters or a hundred tokens long." And then you're sort of guaranteeing that extra security in there, which is something that isn't really necessary with non-AI APIs, where they don't usually count per token. It's usually maybe by data out, which isn't so common, but for sure way more common in the AI space. Completely agreeing with you. Yeah, and you can also talk about hallucinations and stuff that really are weaknesses of AI APIs on their own, and how you want to deal with them from the application, I mean, from the API-call perspective. So that's just the beginning of things. All we know is that if you had an infrastructure that gave you controls over requests and responses, with full visibility including the payload, so that before it's encrypted you can see the actual prompt and everything,
then you can have smart decisions taking place, from a security standpoint, from a caching standpoint, you know, usage-based patterns, stuff like that. So yeah, that's another aspect of it. Yeah, that's crazy, because when you're using an AI text service like that, the scope of what you could send it is just limitless. And since there are direct costs associated with that, it's a whole new budgeting paradigm of evaluating: is this the right use of this API, or should we be doing something different there? Exactly. There's an ongoing debate over whether, over time, you will need to shift between AI APIs, so let's say Gemini and OpenAI and Mistral, because of accuracy and cost. Based on your business logic, you'll decide: for those API calls, I want to route them via OpenAI because it will be less expensive, or maybe more accurate, or the other alternative. That's one aspect that may unfold over time. But the other claim is that those AI APIs will actually converge, and the big players out there will kind of even out in terms of accuracy and cost, and then the only play you have left is with the niche AI APIs, the ones trained on a specific model that give you that fine-grained offering. As I said, it's still the early days. None of us knows at this point what the way companies consume AI APIs at scale will look like. All we know is that they will consume them at scale. They'll have to regulate the traffic, have proper controls, see the prompt, take actions accordingly, have visibility. So there's a lot that has yet to unfold over time. And I'll spark another thing here. Up to now, we've been talking about direct consumption of AI: you're making some kind of call that's streamed as an API call to OpenAI, for example.
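The cost-based routing described above, deciding per call whether a cheaper model is good enough, might look something like this. The model names and prices are made-up placeholders, not real offerings, and "requires high accuracy" stands in for whatever business logic actually drives the choice.

```python
# Hypothetical per-1K-token prices; real routing would use published pricing.
MODELS = {
    "cheap-model": 0.0005,
    "premium-model": 0.03,
}


def pick_model(requires_high_accuracy: bool, est_tokens: int,
               budget_left: float) -> str:
    """Choose a model for one call, falling back to the cheaper option
    when the estimated cost would blow the remaining budget."""
    model = "premium-model" if requires_high_accuracy else "cheap-model"
    est_cost = est_tokens / 1000 * MODELS[model]
    if est_cost > budget_left:
        # Downgrade before refusing outright.
        model = "cheap-model"
        est_cost = est_tokens / 1000 * MODELS[model]
        if est_cost > budget_left:
            raise RuntimeError("insufficient budget for any model")
    return model
```

The interesting design point is that this decision lives in the traffic layer, not in each service, so swapping providers as prices and accuracy shift does not require touching application code.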
But the next phase that people are talking about, and just starting to, you know, unravel, is those AI agents, meaning an agent will be able to make other API calls based on its integrations with the other systems you have within the organization. So that AI agent will spark, propagate, a lot of API calls across other API integrations. Think about the sheer amount of API traffic that will be sent via that uncontrolled AI agent, that component that's not even a human. This is something embedded within your own product, and now it's making all of those API calls, connecting to so many places. How do you keep track of it? So this is another challenge on its own that will unfold over time. I mean, I think part of the struggle here is that the value is maybe not so clear to the companies offering AI as a service, or APIs that give access to an AI as a service, because otherwise they'd probably charge by value and not by tokens, and then I think we'd get to a better state. But we've already gotten to the point where niches have popped up, things like sentiment analysis or image generation, so there are already these cases where a single prompt may need to go in a different direction based on the sort of data we're looking for. So I think both of your hypotheses are true; it's not a duality. We'll get niches of AIs, having to select the appropriate model at the right time depending on the type of data you're looking for. But we're already at the point, I think, of the other effect as well, where the individual companies are being commoditized. Like, oh, I don't care if I use Gemini or OpenAI or Anthropic's Claude model; maybe one's better than another.
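One way to keep an agent's propagated API calls from going uncontrolled, as worried about above, is to wrap every tool the agent can invoke with a counter and a hard cap. This is a minimal sketch with hypothetical names, not any real agent framework's API.

```python
import functools


class AgentCallTracker:
    """Count every downstream call an agent makes and cap the total."""

    def __init__(self, max_calls: int):
        self.max_calls = max_calls
        self.counts = {}  # tool name -> number of calls made

    def tracked(self, fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            if sum(self.counts.values()) >= self.max_calls:
                raise RuntimeError("agent exceeded its API-call budget")
            self.counts[fn.__name__] = self.counts.get(fn.__name__, 0) + 1
            return fn(*args, **kwargs)
        return wrapper
```

The per-tool counts double as the visibility Eyal keeps coming back to: you can see after the fact which integrations the agent actually fanned out to, not just that it burned the budget.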
I think the companies that have the most money, though, will be the ones that go forth and just do the AWS Bedrock thing, which is: we're going to call all the models from every company at once and compare the results, because it offers a couple of different strategies, actually. The first one is, you know, that the result has a higher probability of being right. The second one is that if one of the models has a security vulnerability, the other ones will actually refuse to return the same answer, so you'll know there's a problem with the request or the prompt that went out there, even if you're just passing along the calls at a data-entry level. But I think positioning yourself in this spot does have a huge, sort of untapped security aspect that I imagine people today may not even really be thinking about, but they should be. Yeah, an interesting perspective, what you said there, pretty cool. I'm assuming that not all companies will have the ability to just make a call to every API provider out there, but that's interesting. This is also a way, I think, to battle hallucination: take the answer of one model, run the same thing through another model, and compare between them. I mean, if you're interested in this topic, there's a great GitHub repository, created by Remy McCarthy and Clinton Gibbler of TLDR, where they went through like fifty different recent AI-related papers about how to deal with the problems of AI models. And a real strategy is: take the output from one model and pass it to the input of another model and say, hey, regenerate the prompt, and do this in a cycle. Or, you know, split the prompt into two pieces, run it against different models, and then combine the results. And you do multiple of these things over and over again, and it sort of eliminates the possibility of a malicious actor.
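The fan-out-and-compare strategy described above could be sketched like this: call every configured model with the same prompt and flag any disagreement as a signal worth investigating, whether that's a hallucination or a poisoned prompt. The model callables here are stand-ins for real provider clients; nothing about the names or interfaces comes from an actual SDK.

```python
from collections import Counter


def consensus(prompt: str, models: dict):
    """Call every model with the same prompt.

    Returns (majority_answer, unanimous). A False `unanimous` flag means
    at least one model disagreed, which is the security and
    hallucination signal discussed above.
    """
    answers = {name: call(prompt) for name, call in models.items()}
    counts = Counter(answers.values())
    best, votes = counts.most_common(1)[0]
    return best, votes == len(answers)
```

A real deployment would also have to normalize free-text answers before comparing them (exact string equality is far too strict for LLM output), but the voting structure is the same.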
So, if this is an interesting topic for you, there are so many of these where you're like, wow, you could really be doing a lot of interesting things in this space. That's really cool. Yeah, it is truly interesting. That's going deep down the rabbit hole. I'm gonna go deep down there. There's this article that came out that said that still no companies are making money from AI in any way, shape, or form, other than, I guess, Nvidia, right, for selling the shovels. Yes. I think that every time I'm speaking about AI, AI APIs, the future of it all, eventually it starts to sound like a Rick and Morty episode, and I'm like, okay, I gotta unfold, I gotta go back, I gotta go back. I mean, basically, there was that one episode with the story within the story, with the train, dude, and at the end of it there was a heist, and he's like, that's what I want to do, and they were just stuck in an infinite cycle there. That's for sure where we're going. Exactly. Unsubscribe, I'm off. So I want to shift topics here real quick. Let's talk about running. Let's talk about running. All right, let's do it. Yeah, so I take it you're a runner. I am, not a pro one, but it's something I've been doing since adolescence. Right on. Like long distance? Do you have a specialty, a favorite thing to do? So, I try to keep it casual. I've been running as much as I can, like two or three times a week, and usually, so I'm going to go with the metric system, I can't translate it to miles, fair enough, apologies in advance to all of our audience, I don't apologize, but it's like between ten and fifteen K. So yeah, it's a way to clear your mind, fine-tune yourself, and just, you know, really de-stress. It's something I've been doing for a long time. It's like my own type of meditation. Yeah.
I'm glad you went there, because I did an experiment last year with running, and that was one of the big unexpected learnings I had: running really is meditation. When I first started running, you know, I had music going and stuff, and then over time I just ditched everything, for two reasons. One, because long-distance running was such a meditative experience that whenever you got done, it was like, wow, I feel so much better despite being physically tired. But the other thing I learned was that there's actually a lot to keep track of while you're running. You're focused on: what does my run cadence look like, where am I landing on my feet, is my stride right, what's my respiratory rate? There are so many things you need to focus on to do this effectively that if you have music playing as a distraction, your running quality suffers, and you also don't get that great meditative experience out of it. I agree. Yeah, sorry, go ahead. Sorry, you know, I used to do this, and I'd have these errant thoughts like, oh no, there's this bug in production somewhere, like I wrote code that I know is a problem, or I just thought of this thing that I have to write down. And then an hour later I'd be like, okay, what was that thing I figured out while I was meditating, uh, you know, while running? And it was like a huge problem, and I never really got over that. So I think that if you were to multiply the duration of your run, you would get past that "what was I thinking about?" And it's like, it doesn't matter. I mean, everything is ephemeral in life. It doesn't really matter. It's okay. Exactly. Yeah. So, you know, I think that's the right time to surface something.
Usually in your episodes, the person being interviewed has some kind of recommendation to make, and I thought about what I want to recommend, and it's this. There's this book that I read a few years ago, one of my favorites. It's called Born to Run. Have you heard about this one? I've read it; it's such a great book, I love it. Have you heard about it, Warren? No, actually. So this book, it's an actual story; it's not fictional, it's not science fiction. The author is pained by running, he wants to give up on it, he's a journalist, and then he hears about this tribe in rural Mexico who run long distances as a way of living. They don't have cars; they're just running for a living. And he goes out to this tribe and discovers that they're the ultimate ultramarathon runners. They can run for miles upon miles without getting tired; they can actually beat the vast majority of pro ultramarathon runners. And as he investigates over time, you come to understand that people are actually born to run; it was one of the old methods of hunting. And he finds a tribe that's actually still doing it, in Africa: they run until the cattle, or the target, is, you know, exhausted. Because humans, as opposed to most other mammals, can really sustain endurance over time, because we have sweat glands, so we can ventilate. Most animals have to breathe to do so, and it's a breath per gallop, because they're four-legged animals, so they can outpace you at any time, but eventually, after like fifty K of running, they'll crash down, exhausted, and that's where you stick your, you know, spear in and take it back home.
So, the gist of it is that people are born to run; long-distance running is in our anatomy, and we just forget about it over the course of years. Yeah. I really like that format of book, because the author, and I can't remember the author's name, the way they broke the book up was: it's an entertaining story in itself, but then they break up the sections of the story and go into the science of why that part of the story was true, or how that part of the story works. So the book switches back and forth: here's an entertaining section of the story, here's the science behind it, here's the next part of the story, here's the science behind that. It was both entertaining and educational at the same time, and just a fascinating read. Completely agree with you. By the way, this is the book that sparked the barefoot running trend, right? So do you run barefoot? No way. I'm not part of the trend; I'm just talking about the trend. I'm keeping my Nikes and I'm good to go. No, it's too painful to try to adapt to barefoot running, for sure. Agreed, agreed. Yeah, that was a cool book even if you're not interested in running. It's just a really cool story, because it talks about, you know, humans historically, and how we survived up until this point. And then there are the educational takeaways as well. Cool pick. Thank you. Warren, what did you bring for a pick this week? My pick, I didn't have something relevant, so it's going to be, uh, definitely separate. It's a book again, because, you know, I go through my whole book list, and it's Never Split the Difference by Chris Voss. Yeah. I mean, it sounds like it's a book about negotiation, and I don't think that's accurate. I really like it as a mindset shift for when you think there are only two competing outcomes to a situation. I like the salary negotiation as an example. It's like, if
you say, oh, you know, I want this much money, and someone else says, no, we can only pay you that much, it's sort of not a smart way to approach the conversation. Instead, you want to attach every extra dollar, for instance in a salary negotiation, to something concrete, like: I'm worth this much more because of my experience; this number of years of experience is worth this much, this many more years is worth that much. And then you're agreeing or disagreeing about the relationship between something concrete and the money in a salary negotiation, rather than just throwing out arbitrary numbers. I think that's really important, because it's really gotten me to shift my focus, when I feel like I may be in a negotiation situation, onto what we're actually talking about and where my value is personally, or where the value of doing something is. Especially evaluating software products, which comes up a lot. It can be very difficult to evaluate, you know, which ones we want to use, which ones are good. And if people just get into a conversation of, like, oh, this one's better, you know, do this and don't do that, it doesn't really come out that well. But if you're able to evaluate, okay, we will do that if this, or if we need that, it really adds that extra mindset, and I think this book really helped me get there. And that's another one of those books where Chris Voss, the author, his background is that he was a former FBI hostage negotiator. Yeah, and so the book format is very similar to Born to Run, where he'll tell you this cool story about a hostage-negotiation scenario he was in, and then he breaks down the specific technical components of negotiation that he used in that scenario. So it's that two-part thing, you know: here's a really entertaining story, here's the educational component; here's another entertaining story, here's the educational component. Just that format is so cool to me.
I bet you'd really like The Martian. I don't think I've read that. Oh no, I've seen the movie, though. Oh, okay. So there's this cult-favorite story out there that's been around for so long, Andy Weir is the author, and it's called The Egg, I think, and it's like a philosophy on what life is, and it's so old. And then I read The Martian and I'm like, this is great. And then I found out that I had actually read The Egg, which has been translated into like thirty languages by people. It's a very short short story. But The Martian definitely goes into that format: it's very science-based, but there's some story, and then the main character tells you about the science of why they're able to do what they're doing. So in The Martian he's stuck on Mars and he's in a precarious situation, and there's a section where it's like, oh, I need to grow potatoes; okay, let me tell you about the science of growing potatoes. And then, I did read this book. Yeah, I did read it. I didn't recognize the name, because that was the movie with Mark Wahlberg, right? No, Matt Damon. I think, Matt Damon, Marky Mark and the Funky Bunch, or whatever his name really is. Yeah, yeah, for sure. So I think it's really well done from that regard. Yeah, but my pick officially is going to be Never Split the Difference. Yeah, cool. Both of those are great books, though. Yeah, super cool. All right, so my pick, I'm going with a video-slash-music pick this week, because the YouTube algorithm has figured out that I can't resist watching reaction videos to musical performances, and there's a YouTuber called The Charismatic Voice. The specific video I watched, and I've watched a couple of them, the one I most recently watched was her reaction to the Iron Maiden song The Trooper. She's a classically trained vocalist, and she's watching this heavy metal video, you know, Iron Maiden is straight-up heavy metal if you're not familiar with them, and reacting to it.
But the part that makes it cool is she just gets so into it. You can see her passion and her enthusiasm, which makes watching, or listening to, the song that much more enjoyable. She just has a great personality. And then, same thing: she'll watch part of the video, then stop and give you the educational or technical breakdown of that specific piece of it. I think I'm picking up on a pattern of things that I like here. But anyway, on YouTube, The Charismatic Voice; the reaction to Iron Maiden's The Trooper was really good. Or, she did a reaction to a Led Zeppelin song, shoot, I'm drawing a blank on the name of the song, but she did a Led Zeppelin song and she just lost her shit in that one. I mean, she couldn't stop giggling, she was enjoying it so much, so it was really cool to watch. I think you actually did that Led Zeppelin song as a previous pick because of her. Could be, yeah, very well could be. Cool. All right, well, that's going to bring us to the end of our episode. To all of the listeners, thank you so much for listening. Be sure to let us know on LinkedIn, X, or whatever your social media platform of choice is, or, if you Google for, I don't know, probably forty-five, fifty seconds, you will find my email address. Feel free to shoot me an email as well. Like, if every scammer on Earth can find my email address and you can't, I don't want to hear from you anyway. I'm kidding, I am totally kidding. But thank you for listening. Hey, Eyal, thank you for joining us on the show. This has been a cool conversation. I appreciate being on the show, thank you so much for having me, it was awesome. Cool. Warren, as always, thank you for being here. Appreciate your added input and helping me out on the show. Yeah, of course. All right, and we will see y'all next week.