LIVE from Manifest 2026: Shipium CEO Jason Murray reveals why AI transformation isn't about making old processes faster but fundamentally rethinking workflows. From turning three-day analytics tasks into minutes with Orca to exploring adjacent areas such as auditing and consulting, Phillip, Brian, and Jason unpack how domain-specific AI creates competitive moats in an era when traditional advantages are dissolving.
Have any questions or comments about the show? Let us know on futurecommerce.com, or reach out to us on Twitter, Facebook, Instagram, or LinkedIn. We love hearing from our listeners!
[00:00:00] Phillip: Hello, and welcome to Future Commerce, the podcast at the intersection of culture and commerce. I'm Phillip, and today we are going live to an episode that we recorded earlier this week at Manifest 2026. We partnered with our friends at Shipium to cover a brand new conference, one that we had never been to before. Manifest covers the shipping and logistics industry, and I saw innovation and automation at a scale I have never seen before. It's an incredible show floor. They have drones and skid steers and automation, self-driving vehicles, and things that I think we'll have a bigger report on in the newsletter and in our Insiders long-form insights piece. You should go check that out. Subscribe if you're not already subscribed at futurecommerce.com. But, uh, we got to sit down with Shipium founder and CEO, Jason Murray, to talk about the future of that industry and their newly unveiled tool, their brand new announcement, Orca, which is a predictive analytics tool that is backed by AI, and how their AI insights are powering a whole industry to see into the future. And their customer list is impressive. From Gap to American Eagle to Duluth Trading to Ryder, they are partnering with some of the world's most recognizable brands to bring flexible operations to scale. So we go live right now to the show floor over at Manifest 2026 in Las Vegas, where we sit down with Jason Murray to talk about the future of shipping and logistics. This is my first Manifest?
[00:01:37] Jason: Yeah.
[00:01:39] Phillip: Wow. You know, coming from eCommerce, where things have been a little stagnant for about a decade. Yeah. Brother, this is wild. That's cool. It really is cool. I could talk endlessly about... you know, we keep hearing hype about things like automation. I sat in a bunch of main stage sessions, and I really believe that it's real here. Yeah. It's actually happening here. In our industry, you know, in the marketing space, I think there's a lot of, like, hope.
[00:02:14] Jason: Yeah.
[00:02:14] Phillip: Here, it's not hype. I think it's actually happening.
[00:02:16] Jason: Yeah. I mean... The supply chain space has a really good... it's a little more, the KPIs are clearer. Right? And when you have that, it lets you kind of hone in on what you're really focused on. I think the problem the fuzzier stuff suffers from is, like, marketers can't really agree on what success is. It's a hard problem. Right? I mean, you know, so it's definitely not easy to say, like, "do x, y, and z." But, anyway.
[00:02:46] Phillip: That's generous of you, that's generous of you. But no.
[00:02:50] Jason: I mean...It's just like you, you know, like, I know what you're talking about though. It's like how many times can recommendation engines be redone––
[00:02:59] Phillip: Yeah.
[00:02:59] Jason: ––you know, over the last thirty years or whatever?
[00:03:01] Brian: So true. Yeah. And like you said, Phillip, the AI impact here is real. In every session I’ve attended, people are saying the same thing: it’s having a meaningful effect on their business. And I think about Shipium—and by the way, thank you guys for having us out to record with you. This has been really fun. You’re actually looking at the next generation of how to apply AI right now, more so than many of the other vendors here who may have “.ai” at the end of their name but aren’t truly thinking through what it means to model shipping scenarios—using AI to generate all these potential options across carriers, warehouses, and the entire network in a single view—and then letting AI run simulations against that. That’s super cool. How are you thinking about this? I’m curious what your perspective is.
[00:03:59] Jason: Yeah. I mean, I think the plot is really about going back to basics. Right? You ask yourself, what problem are you actually trying to solve? And my experience with this was that the foundation of the company has always been a deeply embedded optimization platform. It’s called by other systems, and it uses what I’d describe as machine learning, data science, and operations research to make better decisions. The intent is optimization—saving money, increasing revenue, improving outcomes. It works. So our first pass at AI came when 2024 hit and everyone was watching the space. We started asking, if we have this technology, how do we apply it? Our initial thought was to take the existing problems we already knew how to solve and try to improve them with AI. Honestly, that was a dismal failure. And that’s not to say we haven’t made huge strides in areas like coding—I could talk about that for hours—but from a product standpoint, in terms of what we’re actually selling, it just didn’t perform better than a regression model or a random forest simulation for modeling transit times or delivery promises. Fundamentally, we realized AI isn’t going to work well unless you rethink your workflows, rethink the problem you’re trying to solve, and rethink how you’re solving it. Where we ultimately landed was taking a longer-term view. There’s all this activity happening around our platform—around what we do—that we had been completely blind to because, in our minds, we thought, that’s a person problem.
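For readers who want to picture the classical baseline Jason is comparing against, here is a minimal sketch of a random forest transit-time model. The column names, file path, and features are hypothetical illustrations for the sake of the example, not Shipium's actual data or approach.

```python
# Minimal sketch of a classical transit-time baseline (random forest),
# the kind of model Jason says the first AI pass failed to beat.
# All column names and the CSV path are hypothetical.
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split

shipments = pd.read_csv("historical_shipments.csv")  # hypothetical dataset

features = pd.get_dummies(
    shipments[["origin_zip3", "dest_zip3", "carrier", "service_level", "ship_weekday"]],
    columns=["origin_zip3", "dest_zip3", "carrier", "service_level"],
)
target = shipments["transit_days"]

X_train, X_test, y_train, y_test = train_test_split(
    features, target, test_size=0.2, random_state=42
)

model = RandomForestRegressor(n_estimators=200, random_state=42)
model.fit(X_train, y_train)

print("MAE in days:", mean_absolute_error(y_test, model.predict(X_test)))
```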
[00:06:04] Brian: Yep.
[00:06:04] Jason: I just turn it off. I’m not going to deal with it. But when you think about auditors, consultants, analysts at the most junior levels—just the day-to-day work our teams are doing to share data and communicate insights—you realize we weren’t even framing the opportunity correctly. What we actually need to do is empower this entirely new group of users. It’s very adjacent to what we already do, but the point is that I don’t think this is about redoing what people were already good at. My personal view is that it’s about moving into new territory we haven’t explored before.
[00:06:49] Phillip: Is there an example of that? Did the nature of the problem in the space change, or did the way you’re trying to solve the problem change? The customer hasn’t really changed, and the solution hasn’t fundamentally changed either, right?
[00:07:01] Jason: Yeah. I mean, I think where you end up is just like, you know... Let's just take audit, which is kind of adjacent to what we do. Right? So audit serves the same industry. It's a very fragmented industry. It's very consultative.
[00:07:14] Phillip: Right.
[00:07:14] Jason: Right? It requires ingesting lots of unstructured data and working with loose rules about how you evaluate things. But in our world, we’ve traditionally stayed away from it because we didn’t want to manage a large group of people. Maybe we could outsource it to save money, or build some processes around it, but historically it’s been a very people-centric function. But then you introduce AI and say, well, we already have a digital twin and models for how all these costs look. Now I can bring in that unstructured data, create rules and workflows within a context specific to the space, and train the system based on what humans are doing. You can build these learning feedback loops with human interaction, and all—sorry.
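To make the audit workflow Jason describes a little more concrete, here is an illustrative human-in-the-loop sketch: compare invoiced charges against a modeled expected cost, auto-clear small deviations, queue the rest for human review, and let those decisions adjust the tolerance over time. Every name, data structure, and threshold below is a hypothetical stand-in, not Shipium's implementation.

```python
# Illustrative human-in-the-loop audit loop; all names, data structures,
# and thresholds are hypothetical, not Shipium's actual system.
from dataclasses import dataclass

@dataclass
class InvoiceLine:
    shipment_id: str
    charged: float
    expected: float  # e.g., produced by a cost model or digital twin

def audit(lines, tolerance=0.05):
    """Auto-clear lines within tolerance; queue the rest for human review."""
    cleared, review_queue = [], []
    for line in lines:
        deviation = abs(line.charged - line.expected) / max(line.expected, 0.01)
        (cleared if deviation <= tolerance else review_queue).append(line)
    return cleared, review_queue

def update_tolerance(review_queue, human_decisions, tolerance):
    """If reviewers routinely clear flagged lines, relax the tolerance slightly."""
    if not review_queue:
        return tolerance
    cleared_by_humans = sum(
        1 for line in review_queue if human_decisions.get(line.shipment_id) == "ok"
    )
    if cleared_by_humans / len(review_queue) > 0.8:
        tolerance *= 1.1
    return tolerance

lines = [InvoiceLine("S1", charged=12.40, expected=11.90),
         InvoiceLine("S2", charged=48.00, expected=31.00)]
cleared, review_queue = audit(lines)
tolerance = update_tolerance(review_queue, {"S2": "dispute"}, 0.05)
```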
[00:08:07] Brian: Didn’t mean to interrupt. No, and all of a sudden you’re operating with real context. Because I think what you’re describing is expanding the scope of the problem. You couldn’t really do that before because everything was constrained by human-limited systems, to the point where no single person could hold it all within their scope of thought. With AI, though, you can ingest all of that information and simulate outcomes to a theoretically infinite degree—run nearly unlimited simulations. It almost reminds me of that Black Mirror episode, Hang the DJ. Did you ever see that?
[00:08:41] Jason: I don't remember the name, but I watched all the Black Mirror episodes.
[00:08:43] Brian: The one where they do the dating simulation, and they think that they're like real people in the simulation...
[00:08:49] Jason: Yeah.
[00:08:49] Brian: But then they just keep...
[00:08:51] Phillip: Best example.
[00:08:52] Brian: It is. No. It's like...And it's guaranteed to find the right match as a result.
[00:08:57] Jason: Yeah.
[00:08:57] Brian: And you eventually find the scenario that makes the most sense after they run it through all of these simulations.
[00:09:03] Jason: Yeah. Well, I think, to this point, the interesting thing is human-centric processes: they take a long time, they're very expensive.
[00:09:11] Brian: Yes.
[00:09:11] Jason: They're not repeatable, and that's really, at at its core, that's what's so intriguing about AI.
[00:09:16] Phillip: Right?
[00:09:16] Jason: Because you can—if we’re going to move into these adjacent spaces, we have to be faster or cheaper, or both. You start looking at things that used to take a long time, and now you can run those scenarios in seconds. And once you do that, you get into the classic Jevons paradox: when the marginal cost of evaluating something drops—when it used to be an hourly rate—suddenly what was expensive, even if it was outsourced labor, becomes far less constrained. Machines still cost money, of course, but you reach a point where you can run thousands of scenarios as your default approach. That’s really what’s important here: this is a fundamentally new way to approach the problem. It’s not about making traditional auditing incrementally better; it’s about rethinking how you audit altogether. If I’m going to evaluate something, I’m going to do it in a completely different way. So analytics, consulting, auditing—they all start to uplevel. But to really take advantage of AI, you have to rethink how the work gets done. And I think that’s the core reason adoption can take time.
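A quick back-of-the-envelope illustration of the Jevons paradox point: when the marginal cost of evaluating a scenario collapses, the same budget buys orders of magnitude more experiments. All of the figures below are invented purely for illustration.

```python
# Hypothetical numbers showing how the affordable scenario count explodes
# when evaluation moves from hourly human work to cheap automated runs.
analyst_rate_per_hour = 75.0
hours_per_manual_scenario = 4.0
cost_per_manual_scenario = analyst_rate_per_hour * hours_per_manual_scenario  # $300

cost_per_automated_scenario = 0.05  # assumed compute cost per run

budget = 3_000.0
manual_runs = round(budget / cost_per_manual_scenario)        # 10 scenarios
automated_runs = round(budget / cost_per_automated_scenario)  # ~60,000 scenarios

print(f"Manual: {manual_runs} scenarios; automated: {automated_runs} scenarios")
```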
[00:10:54] Phillip: Yes. Yeah. Well, what we’re seeing in a lot of industries, especially on the professional side, is there’s a lot of excitement around embedded AI in tools. We’re seeing that. We’re also seeing folks developing their own workflows and day-to-day processes—so you have your own interesting copilot, your own agent. I saw a lot of agentic discussion in the main session, and they were talking about the future of having all the context in your business and having that pilot. What you’re really talking about, though, is something even beyond that: real-world modeling. Someone might say we’re creating scenarios, which reminds me of predictive analytics from years ago that never quite paid off—but this is predictive outcomes, potentially. Of course, there are limitations, especially with LLMs; there are things they can’t solve. When you get into the intricacies of supply chain, zone skipping, and carrier surcharges, that’s where we’re seeing this later stage of verticalized AI. Talk to me about how this space is maturing. Verticalized AI—we’ve kind of been there—but where are you heading beyond that? Audit was one factor. Do you see this as Shipium having verticalized AI specifically for Shipium, or is this really expanding into these specializations per product area?
[00:12:47] Jason: You know, I’d say some of this is still in flight, but loosely, what we’ve seen so far, from the boots on the ground, if you will—I think at a high level, the horizontal public internet LLMs, ChatGPT, Gemini, whatever—they’re limited by what you can get out of a forum or public data. And I think you alluded to a couple of points: in order to actually make them useful in a given domain, you have to provide context. There are lots of ways to do that—retrieval-augmented generation, RAG models, vectorizing, prompt engineering, creating datasets to build on top of these systems. We even have someone at Shipium who was working on how to adapt an LLM for Shipium—making it work with less-used languages, for example. So there’s actually some tweaking of LLMs in certain cases, but all of these are really just techniques.
[00:14:02] Jason: It's kind of the how you get there. What we've seen is that there does seem to be a lot of reuse across a broader domain. Right? So the way I see it falling out is we're gonna end up building effectively a shipping or supply chain-oriented LLM, and that core set of things is going to act as a really powerful kind of context layer to help decode information in this domain. And you end up reducing the hallucination effects significantly. It helps with more of the boring stuff like regulations and, you know, just the necessary... It kinda keeps you out of those things. You can put boundaries around it, but that's at least what we're seeing. Like, we just, you know, kinda did the normal thing where you say, let's do a specialized project, but then out of that project usually come two or three innovations that can lead to other things that are adjacent in the space. So...
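As a rough picture of the "context layer" idea, here is a minimal retrieval-augmented sketch: fetch the most relevant domain documents and prepend them to the prompt so a general LLM answers within the shipping domain. TF-IDF stands in for an embedding or vector store, and the documents and the `llm_complete()` call are hypothetical placeholders rather than anything Shipium has described.

```python
# Minimal retrieval-augmented prompting sketch for a shipping "context layer".
# TF-IDF retrieval stands in for a vector store; documents and the LLM call
# are hypothetical placeholders, not Shipium's architecture.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

domain_docs = [
    "Residential delivery surcharges apply per package for certain carriers ...",
    "Zone skipping consolidates parcels and injects them closer to the destination ...",
    "Dimensional weight is billed when it exceeds actual weight ...",
]

vectorizer = TfidfVectorizer()
doc_vectors = vectorizer.fit_transform(domain_docs)

def build_prompt(question: str, k: int = 2) -> str:
    """Retrieve the k most relevant domain snippets and wrap them around the question."""
    query_vector = vectorizer.transform([question])
    scores = cosine_similarity(query_vector, doc_vectors)[0]
    top_docs = [domain_docs[i] for i in scores.argsort()[::-1][:k]]
    context = "\n".join(top_docs)
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"

prompt = build_prompt("Why did my residential surcharge fees spike last month?")
# answer = llm_complete(prompt)  # hypothetical call to whatever LLM backs the system
```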
[00:15:06] Phillip: You sort of defined what sounds like a useful copilot. Where do you think the industry is overpromising? Are there overpromises happening at the moment?
[00:15:15] Jason: I think the joke is kind of... I mean, I think there's a lot of... We found, at least, that you have to be clear about what the problem you're trying to solve is. And at least for the moment, there are still humans in the loop as you're setting it up. Right? And I think, you know, as you go through this, you could imagine these steps continue to get automated. You're gonna remove things, etcetera. But it still is the human who is responsible for the context. Right? And then the copilot is acting as the thing that's farming out tasks.
[00:15:59] Brian: Totally.
[00:15:59] Jason: Probably coding is the best example if you want to look at the future and see what this might look like. I actually got really into coding a couple months ago just to try all these tools out, and my experience is that, yeah, they’re all helpful, but you still have to understand what you’re putting together with the current set of tools. It’s possible there could be a big jump that changes this, but again, my boots-on-the-ground experience is that you still need to provide the overarching structure, and AI just speeds up the busy work. I remember coding in the nineties—you’d figure out what you were going to do and then spend two weeks actually punching it into the computer. Now, once you have the context, it’s very fast to lay out the solution. Even in that space, you’re starting to see things like writing tests in parallel or running multiple things at the same time, which shows the process itself is changing. But the idea that I can just hand everything off to AI and be done? I think we’re still a ways away from that.
[00:17:20] Brian: Yeah.
[00:17:21] Jason: That would be my over-promise.
[00:17:24] Brian: And I think that's probably, like, valuable. As I heard in one of the sessions yesterday, you may want your car to parallel park for you, but you wanna know how to parallel park. Yeah. Like, you don't wanna forget that or lose that muscle as an organization. Because the moment you lose that, if you get into a situation where, you know, your autopilot fails Right. You can't do it. Right. But also another thing I heard was, you know, you talk about some of these people that are in these supply chains at different levels. They've been in there for thirty years doing mundane tasks that we all wish could be done by machines.
[00:18:03] Jason: Yeah.
[00:18:03] Brian: But the thing that I heard was, what do we lose when we hand over the mundane? And I think the answer to that is, actually, there's still gonna be, like you said, a lot of humans in the loop and a lot of things that humans have to do. Yeah. But the data that they have to do them with Yeah. Is significantly better. And the training available and the way to optimize doing them gets so much better. And so I think reporting and, like, data quality, the speed with which we can create good data in these supply chains... I think that's actually gonna become the focus for enhancing domain-specific AI. And I think, to your point, domain-specific AI is the future. Right. The job is gonna be making sure that the data that's put into them is representative of the reality.
[00:18:58] Jason: I think it’s a really big open question right now. I mean, what you’re saying makes total sense—that’s certainly one aspect of it. But the challenge of, like, how do you hire a junior supply chain person these days, or a junior coder, or—well, that stuff is just… those are big, gnarly questions. Traditionally, the way we’ve taught people to do things is to have them do the most mundane work, learn through repetition, and then gradually move to more abstract tasks as they advance in their career or in the organization. I don’t have a great answer for how to teach people in a context where you start as the architect. Or is it just a matter of, like, they don’t teach kids cursive anymore? Does that matter? What does that mean? I don’t know. When I was a kid, we had to do pages and pages of math problems just adding numbers together, and I’m not sure how much that helped with my career as a mathematician, or if it could have been accelerated in another way. I do think it’s a really interesting problem, though.
[00:20:28] Brian: It is. Yeah. How do people learn, and what is the nature of, like, human development? Right.
[00:20:34] Phillip: Right. Maybe not Opus 4.6, but maybe Opus 6, we can get to a place where we're modeling, you know, an entire person's learning journey. You hold out cursive. Let's figure out what happens to them. I don't know.
[00:20:54] Jason: Yeah.
[00:20:54] Phillip: Yeah. So when we're...I'm sitting in a bunch of these sessions, Jason. I'm gonna go to yours in just a little bit. I can't wait. So I can't really talk about that just yet. But there's really some interesting content that I've taken in already. Things that I think probably are, I don't know what I'm gonna say...like, probably run of the mill for you all. But I sat in one session earlier today around Ocado's AMR solutions.
[00:21:31] Jason: Ok.
[00:21:31] Phillip: And it was—you know, effectively, in any other room I would have said, “Oh, this is like a sponsor delivering a case study from the stage.” To me, my mind was just getting blown. Monique Apter, the CRO, was sitting on stage, and the case study she was sharing—I just couldn’t believe it. She was talking about a small distributor in Saint Louis, Missouri, and how they went from handling 12,000 packages in 2017 to around 7,000,000 in 2025—all because of robotics. And during that time, their labor costs only went up 9%. What really blew my mind was this quote: “AI without context eliminates work.” There’s an interesting, very blunt conversation here about the fact that cost controls are required to run the business, human labor is still needed, and enablement through AI is the only way to hit the scale necessary to meet consumer e-commerce demand—something that simply can’t be done with the traditional workforce.
[00:23:06] Phillip: So we have to scale. And I’m just hearing that kind of thing from a stage—I’ve never heard it delivered with such frankness and honesty on the marketing side of the business, certainly not in our side of the industry. I’m not sure what there is to learn from that, but I’ve heard it now in two or three sessions. It’s that sort of conversation. And I think the context too, Jason, is that they seem to be speaking to a higher-level buyer here. It’s like, how do you sell this to your board? How do you sell this to your executives? To me, this is all new and enlightening. I’m not sure if any of this lands with you, but these sessions aren’t just circling AI—they’re also highlighting that there’s a level of conversation driving technological adoption and adaptation in this industry that I didn’t realize existed. Talk to me a little bit about that buyer—who’s here, who’s in the room, who you’re encountering, who you’re talking to at the booth, and the kind of content you have to deliver from the stage in your upcoming session. Right?
[00:24:26] Jason: Yeah. I think that most of the people here are relatively senior. Right? And I think they're looking for things that are gonna move the needle. And it's very logistics, all of this stuff. Right? It's very bottom-line oriented. And, you know, I actually think one of the challenges we've always had as an organization is that usually logistics will be kind of siloed as the cost center of the company, and then there's, like, the revenue center of the company.
[00:25:00] Brian: Yes.
[00:25:01] Jason: And, I mean, that's a whole other thing we can get into. Right? Yeah. But we, you know, spend a lot of time on this trade-off between these two things and how you get the two hands of the company talking... And that problem was kind of de facto solved at Amazon because of how we organized and how we came up. But a lot of these companies don't think about it that way. All that being said, I just think that people are under tremendous cost pressure. Right? And there is a kind of no-bullshit attitude toward how do we make real changes? How do we get real cost out of the system? You have to be very diligent about how that all works. I mean, I think our solutions—I use the word “optimization” because it’s exactly that—we’re trying to optimize some outcome, whether it’s more revenue-focused or cost-focused. And then I think you just have to be really diligent about hidden costs. You can decide to build something yourself, but then you incur the maintenance and everything else that goes along with it. We have to, of course, present that to them. And then on the physical side, it’s like, do I offset labor with something else? Maintenance or, you know, problems or exceptions or whatever else. Right? So, I mean, I'm ten years out of date now, but the thing at Amazon was, when we did the warehouses, when we kinda moved to the Kiva stuff, I don't even think we really got a huge bump in total productivity of a given warehouse. But what it did do is it removed the labor constraint from the warehouse. Right?
[00:26:46] Brian: Yeah.
[00:26:46] Jason: And that was a huge problem because... if you're in retail, you have this, like, peak season where you have to bring all these people in to run these processes, and you just literally cannot find enough people to keep up with the scale. So kind of the only way out is to find some sort of automation or, like, solve this. Right? But, you know, all of that goes into the buying decision, and everyone is acutely aware. And I think, um, the other kind of thing floating around is that, of course, if CEOs see that this new technology, you know, AI, is emerging, they're like, "well, how do I get me some of that?" Like, that's the bottom line.
[00:27:30] 100%. {Laughter}
[00:27:31] Jason: Right? Like, of course you're gonna do that.
[00:27:33] Brian: Yes.
[00:27:34] Jason: You're...It's very easy to imagine how this is gonna have this massive impact on how efficiently I run the business. And it’s not to say I’m a big believer that this necessarily leads to job loss. Right? I think it probably leads to reinvesting in other areas over the long term. The short-term effect—well, economists spend hours pontificating over all this stuff—but I think the bottom line is it’s going to have an effect on productivity. That’s right. And people want access to that. So there’s going to be a massive hunger. A little digression, but I do think one of the more interesting things with go-to-market this past year is that, more than anything, AI broke a bunch of budget cycles. That’s why there was so much—if you’re in enterprise sales, you understand—people are only in the market a small percentage of the time. Out of the whole set of people in your TAM, only a fraction are actively in-market at any given time.
[00:28:40] Brian: Right.
[00:28:40] Jason: Wanting to buy. Right? That's right. And what it did is it kind of, like, broke that cycle. And that's why you see so many of these high-flying companies that are exploding, because they've got access to this much larger set of people that want to reimagine their CRM or reimagine the way they're thinking about this thing, because people just want that productivity.
[00:29:02] Brian: Right. So someone's CEO is looking at it saying, "I'm willing to break my budget cycle, because I know I can save even 1 to 2%," and when you're talking about the scale of the logos that you have on your wall right here Yeah. Like, we're talking millions, if not more. Hundreds of millions of dollars when you talk about more than 3%.
[00:29:24] Jason: Exactly. And they're all terrified of being out-competed by the kid in his dorm room right now who's developing a solution to compete with them. Right. I mean, that's a legitimate fear. If I don't figure out how to work in this context, I'm going to get out-competed by somebody who does.
[00:29:41] Brian: Right.
[00:29:42] Jason: And and that's just the market doing its thing.
[00:29:45] Phillip: Yeah. I think that's where, you know, we see what you're doing. We see the customers that you're serving and the scale that you're serving them at. And it seems obvious to me that there's just no sense of... like, you have to be out in front and you have to anticipate where that market's going. And to your point earlier, sometimes you move in one area and you try to create demand where the market may not be ready for it. 2024 was maybe a year ahead of where the buying cycle was.
[00:30:22] Jason: Yeah. That's right.
[00:30:22] Phillip: Right? But look where we are today. You know, there are budget line items now for buying AI today. We're gonna see shipium.ai, you think?
[00:30:33] Jason: I don't know if we bought...I think we might have actually bought it, but... We don't have any plans of like re-anchoring on that.
[00:30:41] Brian: So yeah. Well, you have your new AI analytics platform, Orca.
[00:30:45] Jason: Yeah.
[00:30:45] Brian: Which is pretty exciting. Yeah. The ability to, like, assess data at the scale that you're talking about through that platform is gonna change a lot of stuff.
[00:30:54] Phillip: That's exciting.
[00:30:56] Brian: There are a couple of other themes I’ve heard from some of your partners and customers, things they’ve said you’ve helped with. Yeah. But I’m just curious to get your reaction to this. One of the things I heard in the session yesterday was the need for speed—speed of reporting, speed of responsiveness. You could almost call it adaptability. Yeah. To just improve like crazy. And at the same time, I heard people talking about being able to predict what’s coming—predictability. So it’s predictive adaptability. Like, where’s... I mean, that's not even... that's not real.
[00:31:40] Phillip: Someone should trademark that.
[00:31:41] Brian: Yeah. I don't know. But like, as you look at this floor and you think about, you know, where things are headed as a whole, it feels like being able to know what's gonna happen. And then if it doesn't happen the way you think it's gonna happen, being able to flip on a dime. Yeah. That's, like, what's next. What do you see as the next steps toward something in that game?
[00:32:05] Jason: Yeah. I think, I mean, in general—forgetting the AI conversation for a little bit—you’re going to get efficiency from either being able to predict the future better or being able to flexibly adjust to it faster. Yeah. Right? And that’s just…you know, that’s why people—it’s very pronounced in supply chain because it shows up in cost structure. Right? So, if I’m really, really good at predicting the future, then I can order six months in advance, wait for it to be manufactured in a far-off country, wait for it to come, etc., etc., and it’s not a problem because I know I’m going to sell it and it all kind of works through the system. Right? I mean, the other side of that is if you compress these lead times, which we talk about a lot in this world…
[00:32:53] Brian: Yeah.
[00:32:54] Jason: You can create a lot of flexibility because something changes. You're not kind of overinvested in that, you know, where things are gonna go. Right. And so, I think that's the overarching, that's kinda like the framing. And so, I think what ends up happening here though, is kinda both. Right? If you think about these processes as we’re reimagining them, part of it is again getting faster at turning things around. You know, our first thing with our Orca analytics product is oriented toward an app. What used to be a two- or three-day activity is now minutes, where I can explore a space. Right? I want to understand, like, what happened to my fees? Why are these things consistently late? You know, what happened with this dataset? And now it’s minutes. Right? And that lets me quickly iterate to get to a bigger insight. But even on top of that, you’re starting to glue these things together with autonomous workflows. Yeah. Which gets you to the simulation discussion and how do I—
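To ground the kind of exploration Jason describes ("why are these things consistently late?"), here is what that sort of analysis might look like in plain pandas. This is purely illustrative of the question being asked; the file, column names, and thresholds are hypothetical, and it is not Orca's interface or schema.

```python
# Illustrative late-shipment exploration in pandas; the dataset, columns,
# and thresholds are hypothetical, and this is not Orca's actual API.
import pandas as pd

shipments = pd.read_csv(
    "shipments.csv", parse_dates=["promised_date", "delivered_date"]
)
shipments["late_days"] = (
    (shipments["delivered_date"] - shipments["promised_date"]).dt.days.clip(lower=0)
)

late_rates = (
    shipments.assign(late=shipments["late_days"] > 0)
    .groupby(["carrier", "origin_facility"])["late"]
    .agg(late_rate="mean", shipment_count="count")
    .query("shipment_count >= 100")          # ignore thin slices
    .sort_values("late_rate", ascending=False)
)
print(late_rates.head(10))
```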
[00:34:03] Brian: Yeah.
[00:34:03] Jason: Try a bunch of things automatically and start detecting even faster. But it'll let you adjust. And I think, you know, mostly prediction. Right? A lot of it ends up being about these regressions or these scenarios, etcetera. And so... It's really this powerful feedback loop around continually trying things and using that to drive where your stuff is going.
[00:34:29] Brian: I like that experimentation, testing things out, trialing things. It almost reminds me of, like, Nathan Fielder in The Rehearsal. Like, go model it and try it out. Rehearse it. See what's gonna happen. Right?
[00:34:43] Jason: But that was such a–I mean, you know... I keep talking about this. Right? But if it takes, like, weeks to get this answer back, you're not gonna be really willing to experiment. Right. You're like, I only get three shots and that's it. Sure. Like, it's really about, if you increase that speed, you're now in this position to really start trying lots and lots of experiments and playing with... you know, your playbook just gets much larger. Right? Right. And that's just an example of how we're right at the forefront of that.
[00:35:19] Phillip: Yeah.
[00:35:20] Brian: That's so cool. Yeah.
[00:35:22] Phillip: There's a...this is totally, you know, not in our plan. But are you familiar with prediction markets at all? Are you looking at this at all?
[00:35:31] Jason: Yeah, yeah, a little bit.
[00:35:32] Phillip: You know, there's a really funny joke that's floating around as a meme right now. It's like, the future of commerce, if prediction markets have their way, you know, what you do is you open a market that says, “I bet nobody drops off four avocados on my doorstep within the next hour and a half.” Right? And then that’s how deliveries are going to happen. It’s like somebody out there is going to say, “I’ll take that bet.” And that’s how all fulfillment is going to happen. It’s really all probabilistic, you know. That’s how the—
[00:36:07] Jason: Like the meme of the streaker at the Super Bowl, who placed a bet on himself that a streaker would show up at the Super Bowl.
[00:36:12] Phillip: Well, mean, that's–
[00:36:13] Jason: I don't know if that's true or not.
[00:36:14] Brian: It's a funny joke. Probably. Right. Like, I believe that's probably true.
[00:36:17] Phillip: There’s a part of me that wonders, you know, taken to its logical end. I mean, it does seem like there might be enterprise applications in prediction markets around these outcomes. You are working in a field that has a lot of room for problems to happen along the chain. There are many hands that touch it, and a lot of systems that have to work together in the right order to ensure a positive customer experience. Predictions don’t always go right, and I think that’s where you start to see, especially in these consumer markets, people hedging to the downside as well. Yeah. I’m really interested to see... that’s part of our predictions, I'm saying the word a lot, but we always look ahead with this outlook for 2026. Last year, we had, you know, up to a 94–95% hit rate. This year, we’re looking a lot at prediction markets because I think it’s really interesting consumer behavior. So I’m really curious, just from your take on it, if that was something on your radar, because you seem like a person who’s really up on these things.
[00:37:38] Jason: Yeah. I mean, I’ve always been a big fan of prediction markets because they’re one mechanism to capture larger sentiment. Yeah. Markets traditionally have always contained information in some form. I mean, the modern stock market, you could argue, is maybe a little perverted at this point because of all the machines doing it and stuff. Right. But the interesting thing to me about some of these Polymarkets and whatnot is that they reflect why people think one way or another. It’s, honestly, more in the vein of the fact that we don’t do enough A/B testing, experimentation, or extraction of information from people—understanding why people behave a certain way and then using that. That is fundamental to data-driven decision-making. Right. Sure. And people are naturally much more inclined toward storytelling as a decision process. But if you can incorporate data into your decisions, even taking yourself from 10% to 30%, you’ll probably get much better outcomes. Right? So I don’t have a definitive answer, but I think it’s very interesting to ask why Polymarket thinks this way versus my storytelling version. Why am I getting signal from here? Now you have the tools to process all this and potentially turn it into something useful across all these things, which just wasn’t feasible before.
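As a tiny illustration of the data-versus-storytelling point, here is one way to read a prediction-market price as an implied probability and blend it with your own prior. The numbers and the 50/50 weighting are arbitrary assumptions for the example, not anything Jason prescribed.

```python
# Read a market price (in cents per $1 contract) as an implied probability
# and blend it with a subjective prior. Weights and numbers are arbitrary.
def blended_estimate(market_price_cents: float, my_prior: float,
                     weight_on_market: float = 0.5) -> float:
    implied_probability = market_price_cents / 100.0  # e.g., a 5-cent contract ~ 5%
    return weight_on_market * implied_probability + (1 - weight_on_market) * my_prior

print(blended_estimate(market_price_cents=5, my_prior=0.20))  # ~0.125
```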
[00:39:35] Phillip: I mean, there's a 5% chance that Jesus comes back in 2027 right now. So I don't you know?
[00:39:40] Brian: Did you put a bet yet?
[00:39:41] Phillip: No. I'm not. But I was like, "I might. At this point, look at what's happening." It could go up. I mean, there's some money to be made.
[00:39:50] Brian: I think you're really onto something, which is, like, belief systems and how they influence behavior. Especially when you see how other people believe. And so, I mean, we're going into the future here. But, like, as shippers, as I've heard them called here... anyone that's... I usually call them brands and retailers, but–
[00:40:12] Jason: Yeah.
[00:40:13] Brian: People who are out there making shipments start to share data and maybe contribute to group experiments. I wonder how that starts to change the model. Because I do think there will be a world in which there's anonymized data that gets shared across shippers. In fact, that's kind of what you're gonna have. Right? Like, you're gonna have the data across multiple shippers, and you're gonna be able to run models against multiple people at one time. Yeah. I mean, at some level, kind of like in Polymarket and Kalshi, there's an element of, like, self-fulfilling prophecy that might be able to come true when you combine that. Like, you know, everyone has stuff to go out on Black Friday. Yeah. Right? They already all know it. They already know we're all gonna scale up. How is it all gonna get shared and optimized across those vendors?
[00:41:09] Jason: Yeah. And then, you know, it's like... Especially in this space, where you have a large number of carriers or something, the market is inherently inefficient because information is not being shared.
[00:41:21] Phillip: Right.
[00:41:21] Jason: And, you know, one argument is... people want it that way. Because it's a classic kind of... If you can obfuscate information, you have leverage and power in that scenario. You do wonder. But as that trickles through the system, then potentially the whole thing does get more efficient. Right?
[00:41:43] Brian: It gets more efficient, but then also, like, how do you have an edge in a world like that?
[00:41:50] Phillip: You use Shipium. That's what you do.
[00:41:51] Brian: You use Shipium.
[00:41:54] Jason: I mean, the whole, like, trying to find edges in this stuff is such a fascinating... That's a whole other, you know, discussion about where the moats are in this new world.
[00:42:02] Brian: Right? That's exactly right.
[00:42:04] Jason: You see the software multiples right now Sure. In the public market. Right? You can see Right. What people used to think was an edge or a moat Yeah. Is no longer considered a moat by investors. It's probably still TBD how much of that is true.
[00:42:20] Brian: Right.
[00:42:20] Jason: But it’s—I mean, I spend a lot of time thinking about that, kind of Shipium’s “right to win.” Sure. What are our differentiators that are actually protective, defensible? Yep. And it’s very much like—I do not think there’s a clean answer out there. If you talk to VCs and push them on it, they don’t have a great answer. I mean, I think data is a good one. Yeah. I think, you know, we believe there’s a lot of private data that we’ve been able to capture over the years. Yep. That helps us significantly. I think there’s both actually having the data, but also modeling that data and understanding how to work with it. And then I think there’s also just a lot of private ecosystems that exist,
[00:43:13] Brian: Yeah.
[00:43:13] Jason: –that you have to kind of figure out your way into, and the network effect of getting into all those things becomes a problem.
[00:43:20] Brian: So trust and relationships.
[00:43:22] Jason: Right. Right.
[00:43:22] Brian: Starts to play a huge role. Because, like, to get into an ecosystem like that, you've got to earn the trust of that ecosystem. Yeah. And that is a totally different skill set that I think a lot of the moats are built on right now.
[00:43:36] Jason: Yeah. Yes. That's right. But I think, you know, it's gonna disrupt a lot of things. And a lot of things that were relatively protected are gonna be disrupted over the next five, ten years. And I don't think there's a great answer. Anyone who tells you they know what's gonna happen in five years is, you know, full of shit, basically.
[00:43:58] Brian: Yeah. We predicted it for ten years.
[00:44:02] Phillip: No. I know. One year is about all we're...
[00:44:05] Brian: Yeah.
[00:44:05] Phillip: Comfortable doing. That's good. I'm really looking forward to your talk.
[00:44:09] Jason: Yeah. Oh,
[00:44:09] Phillip: Good. Good seeing you again.
[00:44:10] Jason: Cool. Good to see you.
[00:44:10] Phillip: See you too. Thanks for having us.
[00:44:11] Jason: Thanks. Appreciate it.
[00:44:13] Brian: Jason, good to see you. Have a great day.