Episode 29
March 30, 2017

Body Data is the Next Revolution

We sit down with Jon Cilley and Bill O'Farrell from Body Labs to talk about AR, Body Data, the future of Apps and the social economy



Phillip: [00:00:39:29700] Hello and welcome to Future Commerce, the podcast about cutting edge and next generation commerce. I'm Phillip.

Brian: [00:00:45:29700] I'm Brian, and today we have...

Phillip: [00:00:48:00] Oh, go ahead.

Brian: [00:00:49:10800] Yeah.

Phillip: [00:00:49:81000] Yeah. I'm so excited that I couldn't even prevent myself from continuing to talk.

Brian: [00:00:55:66600] {laughter} Well, I'm also very excited. We got Body Labs here with us today, a company that we've been talking about for a while. So we've got Jon Cilley and William O'Farrell, or Bill O'Farrell, both on the line with us today and super excited to be talking with these guys. We've been keeping track of them for quite some time and they recently released a product that we got to try out. And really excited to talk to them about the future of body data. So, guys, go ahead and introduce yourselves and maybe give us a little bit of history about how you got into the industry and we'll go from there.

Bill: [00:01:36:11700] Cool. This is Bill O'Farrell, one of the co-founders, and I'm the CEO of the company. And, you know, the technology around which we've built the company has a pretty long history. It was originally developed at Brown University in the group of Michael Black, who's also one of the co-founders. He was a computer vision professor at Brown and now runs the Perceiving Systems Group at the Max Planck Institute in Germany. So he's a director of one of the Max Planck Institutes. And I guess the technology itself has been under development probably since 2006. And originally, in fact, Michael kind of got the idea of being able to identify a 3D body when he was asked to help use computer vision to solve a crime. It was a crime in Virginia, and there was some videotape surveillance. They couldn't quite figure out the size and shape of the perpetrator. So he gave that to his class as an assignment: from this 2D information, what could you tell? And I think it was, could they fit behind the wheel of a certain car? That sort of led him to explore this notion of shape and pose and motion. And accordingly, we think we've got really the world's leading technology for providing a 3D avatar, fully automatable, based on data from sources that range from 2D photos and video, to 3D if you have a scanner, or even things like measurements. So the technology's got a long history. The company started in 2013. We licensed the technology from Brown and the Max Planck Institute, and we've grown from three co-founders to about 40 people today.

Jon: [00:03:35:40500] Hi, this is Jon Cilley. I'm the VP of Marketing here at Body Labs. And you know, what's interesting about where we are today is that in the past, we've been reliant upon a lot of 3D inputs or measurements. We've done a really great job of being able to take a variety of different input types, whether that's a 3D scanner, whether that's simple measurements, or a whole host of other things, and accurately replicate your 3D avatar. What was really exciting moving into the summer of last year is we started taking this new deep learning approach where we could understand both the 3D pose and 3D shape predicted from something like a conventional image or a commoditized 2D sensor. And from that standpoint, it was really exciting to us because, as you can imagine, not everybody's got a 3D scanner kicking around at home. And so it was really compelling to be able to increase the total addressable number of users, customers, et cetera, or their own customers, with access to conventional imagery or conventional sensors, whether that's through a smartphone or through photos you've already uploaded to a social media platform. And we asked ourselves, OK, what are a series of things and techniques that we can do to package this capability up so we can get people using and touching Body Labs' technology, but do it in a really compelling and exciting way? And so that's where Mosh comes in. You had alluded to this new product that we released a couple weeks ago: Mosh, M-o-s-h, a new mobile application that we released for iOS, at least for now, which specifically empowers you to touch the Body Labs tech from the same standpoint I just mentioned, using normal imagery of you inside a photo, and do something fun and engaging with it by rendering 3D effects or other things on top of that. And so that's kind of where we're at today: moving from that progress made over the summer of 2016 and now into 2017, we really see a whole host of other applications that this could be injected into, whether that's retail applications, retail analytics, sizing, all sorts of fun things.
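
For readers curious what "predicting 3D pose and shape from a conventional photo with deep learning" might look like in code, here is a minimal sketch of a regression-style network in PyTorch. The layer sizes and parameter counts (a 10-dimensional shape vector, 24 joints) are illustrative assumptions, not Body Labs' actual model.

```python
# Minimal sketch (not Body Labs' model): a CNN that regresses body shape
# coefficients and joint rotations directly from a single RGB photo.
# Dimensions below (10 shape params, 24 joints) are illustrative assumptions.
import torch
import torch.nn as nn
import torchvision.models as models

class ShapeAndPoseRegressor(nn.Module):
    def __init__(self, n_shape=10, n_joints=24):
        super().__init__()
        backbone = models.resnet18(weights=None)      # image feature extractor
        backbone.fc = nn.Identity()                   # keep the 512-d features
        self.backbone = backbone
        self.shape_head = nn.Linear(512, n_shape)         # body shape coefficients
        self.pose_head = nn.Linear(512, n_joints * 3)     # axis-angle rotation per joint

    def forward(self, image):                          # image: (batch, 3, H, W)
        features = self.backbone(image)
        return self.shape_head(features), self.pose_head(features)

model = ShapeAndPoseRegressor()
photo = torch.randn(1, 3, 224, 224)                    # a stand-in for a real photo
shape, pose = model(photo)
print(shape.shape, pose.shape)                         # torch.Size([1, 10]) torch.Size([1, 72])
```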

Phillip: [00:06:01:53100] Yeah. Wow. I was going to do the thing that I usually do, which is to sort of make light of the fact that you can tell these guys have a podcast. Right? Because they know how to talk. This is awesome. No, but seriously, thank you so much for coming on the show because we have been talking about this for so long. And I'm sure you've heard my own two cents about this particular area of innovation. And I know Brian's a big fan of this. I'm so excited to talk about what the implications are for commerce and where you think the consumer value prop is. So thank you both for being on the show. Sorry. Go ahead there, Brian.

Brian: [00:06:44:77400] I was going to say similar things, which is that it's super exciting for a company that has such foundational tech. I think the applications for this are almost limitless in terms of what we're gonna be able to do. I think it's going to go into all different types of industries and affect commerce in so many different ways, from fashion design to health care to custom furniture. It's really quite exciting how we're going to be able to use this tech to purchase things. So, yeah, I think maybe you guys could probably tell it better than we can. But you've already developed one product, one small application, in which people can see themselves in 3D and then really play around with it, which I got a chance to try. It was really fun. But maybe talk about some other ways that you're looking to apply the tech and where you see the next steps.

Bill: [00:07:56:59400] Yeah. So this is Bill, and I think with Mosh, which is, as Jon noted, an iPhone app, what we're doing there is we're taking a single photo, and soon we'll be doing this with video, and we're extracting the 3D pose and 3D shape. And again, we'll be doing this with video, so we'll be able to extract the motion relatively soon. So that really is the fundamental informing technology. So what else can one do with that? To us it's in some ways limitless. And we do have what we perceive as a horizontal technology, meaning if we've extracted your 3D shape, then we can match you with other people of the same shape so you can recommend goods and services to each other. We can match you with clothing that will fit you better. If we can extract your motion, then we can apply that to your avatar in a video game, perhaps. If we can extract your shape and motion, we can drop you into virtual environments for all kinds of things. It might be an AR environment, it might be a VR environment, it might be a virtual shopping world where you can try clothes on. So there's really sort of no limit to what we think we can do with that. And really, the question for us as a company is just how do you move that into these vertical markets? We know a lot about bodies. We know a lot about shape, motion, and pose. We know less about autonomous driving vehicles or the apparel industry or the video gaming industry. So really, we look to partner with meaningful companies in each of those areas to deploy the early applications.
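
The "match you with other people of the same shape" idea is, at its core, nearest-neighbor search over a vector of body-shape parameters. A toy sketch, with random stand-in data rather than real shape vectors:

```python
# Minimal sketch of shape-based matching: treat each person's body as a
# vector of shape coefficients and find the closest other bodies.
# The data here is random; real shape vectors would come from a body model.
import numpy as np

rng = np.random.default_rng(0)
shape_vectors = rng.normal(size=(10_000, 10))   # 10,000 people, 10 shape params each
me = rng.normal(size=10)                        # my shape vector

distances = np.linalg.norm(shape_vectors - me, axis=1)
closest = np.argsort(distances)[:5]             # indices of the 5 most similar bodies
print("People shaped most like me:", closest)
# Their purchases, sizes, and reviews could then drive recommendations.
```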

Jon: [00:09:48:35100] Yeah. And just to add on to that, too, we see this world where, especially if you can understand the pose and motion of what somebody is doing through a viewfinder, whether that's a smartphone camera or something else, you can unlock this whole other realm of body as a controller. Right? To be able to use body-related gestures to control devices in the future. So this is a little bit further out. But Bill had mentioned autonomous driving vehicles; you can imagine, if there's a self-driving Uber cab that's approaching you, wanting to be able to flag that cab down, or doing things like pedestrian tracking, which Bill had alluded to as well. There's a whole host of things that you can do when you view the body as a controller, but then also when you have shape on top of that... When you look at online retail specifically, a lot of other industries have been disrupted by eCommerce. But retail, or clothing retail, has had a harder time getting into that eCommerce space in a meaningful way. Part of the challenge has always been that you really can't try anything on or validate that something's going to fit properly. And so if you could easily provide businesses with the 3D shape of that consumer, and you could do it in an efficient and easy manner from something like a web camera or an uploaded photo from an existing user profile, you could unlock this whole other realm of shape-based analytics, size recommendations, and, in the future, things like virtual try-on. And that's actually a combination of recognizing pose, motion, and shape. So it's really taking the three pillars and focus areas of our technology and uniting them to do something really special. And so there are all these other verticals that can get easily disrupted. And the one thing I'll add, too, is if you were to download Mosh, which I'm sure you guys might provide a link to in the description of this podcast, many people might ask, well, you know, I've gone on Snapchat, I've gone on some other apps, whether it's Snow or others, where there's face detection and they do all sorts of augmented 3D filters, if you will, on top of my face. But the challenge there is that faces were designed to be detected. Think about it. We express and emote through our face. And at a very high level, the way that technology works is by identifying really obvious landmarks on your face and correlating that, using similar approaches, AI, deep learning, etc., to predict where components of your face are: your mouth, your nose, your eyes, etc. Bodies are a little bit more difficult, as you can imagine, because bodies are in motion, bodies are occluded, meaning you can have parts of your body that are blocked by other parts when you're trying to capture. You can be obscured by clothing. So when you try to do things related to shape, you then need to be able to predict the body shape underneath the clothing so that you have an accurate model, as well as all the motion that's affiliated with those things. So there are all of these really challenging things that you need to compensate for when you're trying to detect full 3D. And all of those things dictate that it's important to have 3D, as opposed to the more 2D approach you see with conventional face detection on something like a Snapchat application.

Phillip: [00:13:33:27899] My wife might tell you that the whole point of clothing is to occlude her body shape. {laughter} My question is for you, and in particular, I want to hear a little bit about Mosh, because I think it's kind of interesting... It's a very, very, very interesting app in that you've sort of branded it for fun and sort of social and interactive engagement. But I think there's so much more to it. And I think that it's the same thing that we've seen in social networks where there's a lot of data being gathered and sort of creating powerful value props for a consumer. For instance... And I can give a concrete example of this. I don't know that a brand like Chubbies could have existed prior to Facebook because it was really hard to find and market specifically to guys with dad bods. It's kind of hard to... You wouldn't do that through a traditional multi-million dollar national advertising campaign. It'd be hard to get someone to take a chance on something like that. And now we have very, very specific types of data that we can sort of mine. Do you see your technology being able to be applied to other areas where there's already some really great training data like social media profiles or do you see this interaction taking place specifically in Mosh and other applications that are purpose built for this?

Bill: [00:15:06:70200] Whoa, you asked a lot of questions there.

Phillip: [00:15:08:53100] Sorry I tend to do that.

Bill: [00:15:10:67500] Let me try to peel back the onion skins a little bit. So kind of at the highest level, Mosh, we feel, is both an introduction to the world of what's now possible with technology like ours, and it's also habituating people, to some degree, to be comfortable thinking of their body in a digital format. A lot of what we do doesn't necessarily require that someone see their 3D body. If we know the math around your geometry, we can easily recommend clothing without you or anyone else seeing it. But inversely, the notion that you've got your body as this digital presence, and you can do things to it or with it or for it, is a relatively new phenomenon. So I think at one level, the kind of whimsical part of Mosh is a way of getting people used to it and thinking about a body model in ways that are not really all that intellectual, but more intuitive. I think that's one thing. I think along that continuum is that, again, we have this really deep understanding of human geometry and motion and pose. And where that becomes valuable is not... I mean, seeing yourself move could be useful for medical and fitness related applications. But where that really becomes valuable is when you start marrying it with and correlating it to other kinds of data. So just exactly what you were saying. If we know that there are 10,000 people out there with exactly the same body geometry as you, then you can, if you choose to, share that fact with those 10,000 people. And you can begin to look at what they wear, what they use for a tennis racket or a set of skis. And you begin to correlate the information that may be found in social networks, or just collections of people, to begin making some really powerful insights into what you can do with and for your digital body as well as your physical body. Does that make sense?

Phillip: [00:17:33:33300] Yeah, absolutely.

Jon: [00:17:35:18000] Yeah. I mean, to build on that, too, with apparel specifically. We talked briefly about eCommerce in the apparel space, or even just the physical space. If you had that customer's 3D shape... You know, what's unique about our tech, in conjunction with everything else we just mentioned, is that, to Bill's point about matching bodies together, our tech empowers really anyone, not just us, to correlate together the 3D information we have about each individual person. So we can take the micro side, the individual conversion for that user, helping them, let's say, with try-on recommendations, understanding that it's going to fit. But then we can also do the macro level, where after the fact, really anybody, but let's just use apparel as an example, could then understand: OK, regionally in the New York area, I want to review anybody that has used Body Labs' technology on our site and correlate that with the metadata and purchasing behavior that I have about that user or type of user. So I can say, OK, anybody that bought a small, this is their average body type. And do the measurements that I can then extract from that 3D information correlate with the measurement information that I used to design that size small? And are there modifications, are there changes that I want to make to my size small, to address the people that are already shopping there? Or what happens if somebody purchases something and returns it? What was their sizing for that? And based off of their sizing, if I average them together, what did they look like? And maybe I introduce new sizing or modify my design practices based off of the understanding I have there. So there's this balance, too, of not only taking our technology, but using that technology in conjunction with, and in support of, the information and the purchasing behavior they already have. The missing piece has been that they've never had insight into their customers' shape. And that specific piece is going to be really valuable. And being able to do it now without the hindrance or the handcuffing of relying on enabling technology, using technology that's already out there in 2D imagery and commoditized sensors, enables us to expand that to really everybody.
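
The macro-level analysis Jon describes, averaging the measurements of everyone who bought a given size and comparing that to what the size was designed for, is straightforward once the shape data exists. A toy sketch with invented numbers, not real shopper data:

```python
# Toy sketch: compare the average measurements of shoppers who bought each
# size against the measurements the size was designed for. All numbers invented.
import pandas as pd

purchases = pd.DataFrame({
    "size_bought": ["S", "S", "S", "M", "M", "L"],
    "chest_cm":    [88.0, 91.5, 90.2, 96.0, 98.3, 104.1],
    "waist_cm":    [74.0, 77.1, 76.3, 82.4, 83.0, 90.5],
    "returned":    [False, True, False, False, True, False],
})

size_spec = pd.DataFrame(
    {"chest_cm": [88.0, 96.0, 104.0], "waist_cm": [73.0, 81.0, 89.0]},
    index=["S", "M", "L"],
)

actual = purchases.groupby("size_bought")[["chest_cm", "waist_cm"]].mean()
print(actual - size_spec)                                    # how far real shoppers drift from the spec
print(purchases.groupby("size_bought")["returned"].mean())   # return rate per size
```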

Brian: [00:20:06:22500] I think, you know, it's interesting, and I'm going to go a little bit off subject, but it's related. We also see some other emerging tech that I think will really play well into your body data. And one of those in particular is 3D printing, because you talk about size, but now we're gonna have exact measurements for someone's body, and with basically on-demand printing or clothes printing or, you know, even some form of bespoke clothing variation, we're gonna be able to create clothes that are a perfect fit all the time. And so as those technologies develop as well, it's gonna start getting more into design and other points about the body, like combining someone's complexion with their shape and creating optimal looks for that person. It's not that far off.

Bill: [00:21:20:85500] That's right.

Brian: [00:21:23:00] And also, not only is this going to affect commerce, but again, I'm speaking specifically of fashion, this is going to affect how design even works. I could almost see design becoming, and I've talked about this on the show before, but this could be to the point where design sort of almost has an open source feel to it, because all of a sudden design is something that happens in the virtual world. And people can take files and essentially create their own iterations of those designs, and the way that the whole fashion world works might get a little bit upended. I know I'm kind of pushing the bounds of where we can go. But if this actually works, it could happen very quickly.

Bill: [00:22:20:72000] Yeah. Well, I think we'd like to say that we provide the human body as a digital platform. So if you believe in this world of increasing mass customization where things like 3D printers, computer aided design, robotics... As those begin to coalesce and make their way into the economy and into the manufacturing supply chain, then clearly you need the human body as the platform around which goods and services are designed, manufactured, bought, sold, recommended, etc., etc. So we're very much of the mind that the world that you just described is coming, and it's coming relatively quickly. I mean, we'll say in the apparel space, unlike a lot of other vertical industries, you've still got a supply chain issue there. And democratizing design is probably a little easier because you can actually design through these computer aided design programs. Although there are still some drawbacks, because the physics of the actual garment, the apparel, is hard to simulate. But even being able to democratize design and let people adapt and change and make their own designs is one piece. The real challenge, I think, in the apparel space is the supply chain of manufacturing. And now we're beginning to see things like laser cutting fabric and automated sewing machines. Those kinds of advances are beginning to make their way into the supply chain. And if you think about it, the sewing machine hasn't changed much in a couple hundred years.

Phillip: [00:24:05:56700] Right.

Bill: [00:24:06:77400] But now with machine vision and things like that, we're really starting to see automated sewing machines becoming viable. I think it'll be a long time before... Not a long time, but a cognizable time before they get deployed in massive ways, just because the expense and the change management's an issue. But clearly, in the apparel industry, once you solve that supply chain manufacturing issue, then the whole world changes. And we're seeing that in every other industry. You know, it's already happened to a large degree. So we very much think, you know, Body Labs provides the human body as a digital platform and enables that mass-customized world.

Brian: [00:24:48:81000] It's amazing. So let's talk a little bit about some of the other things you mentioned, just other industries that you can see change coming. I think gaming and virtual communities and even sort of dating sites I might throw in there as well... It's going to be quite interesting.

Jon: [00:25:11:87300] The dating site... Introducing a whole new way to get catfished.

Brian: [00:25:17:63900] Actually, the interesting thing about this is, I wonder if you could sort of... I mean, it could actually help prevent catfishing. I think that with body data, you could actually have a verified body type setting...

Phillip: [00:25:38:15300] Oh Lordy... {laughter}

Brian: [00:25:38:15300] I know. I know...

Jon: [00:25:40:2700] You have to be a little cautious with that. But we're right there with you. There're a ton of industries that it could disrupt. However... Yeah. Depending on what information people are willing to relinquish, dating might be one of the more surprising ones.

Bill: [00:25:58:88200] You might be able to establish that that person really is as fit and slim as they say. So yeah, I do think that, again, in providing the body as a digital platform, we're really kind of bridging the divide between the virtual and the physical world. We talk about... Like, Alexa's a great example. You know, you've got a virtual assistant there. You have some sense that this is a real person, and yet there's no physical embodiment. Being able to provide for that application, or similar ones, a digital presence to go along with the virtual services... We think that that's an important component of what we can add to this increasingly overlapping virtual and physical world. You see it in the virtual assistant, certainly in trying on apparel. You see it in gaming, being able to drop yourself in. If you want to take Stephen Curry's jump shot, that's eminently doable. And inversely, if you want him to do your jump shot, that's doable. So there are all kinds of places that we see, in many ways, much better prepared for what we deliver than the apparel space. You know, the apparel space is still very much a physical touch-and-feel kind of place.

Phillip: [00:28:35:13500] What do you think about this particular application being applied in the area of sort of medical sciences for therapy and rehab and maybe for sports training? I know they are already to some degree, but it seems like you guys have sort of cornered the market on a better, like a fundamentally better technology.

Bill: [00:28:57:63900] Yeah, that's actually a really timely question, because, you know, being able to do motion capture, to track somebody's gait when they're running or something like that, has traditionally been a very costly proposition. You have to put a bunch of markers on different parts of the body.

Phillip: [00:29:20:27000] Yeah.

Bill: [00:29:20:76500] You have this whole setup. We actually can now effectively do that from a smartphone. We can extract... We basically do markerless mocap. So being able to do gait training, you know, track your golf swing, compare your golf swing to Tiger Woods' golf swing. This is all stuff that's really close to being reality, and not in some high end training center, but really with you and your buddy on the golf course. So I think that there are a lot of places there. Physical training. Likewise, you know, if you've had an injury, you can maybe have a video of how you used to run or used to ski or used to kick a soccer ball, and how you're doing it now, and compare to see if you can get back to where you were. You know, when it comes to things like plastic surgery, I think we're not quite... I don't necessarily know that you want to do plastic surgery. I mean, we're still off by millimeters. Same with prosthetics. There's a bunch of issues about prosthetics, and that really has to be super precise. But a 3D cast... You break your arm and you need a 3D cast printed. You know, that's pretty doable. I mean, we could see that happening in the relatively near term. So, again, we find the breadth of applications and verticals to be pretty wide, and being able to deploy into each of these verticals really just requires us to have a little bit of industry knowledge, which we lean on our partners to provide.
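
Once motion is captured as per-frame joint angles, comparing your swing to a reference swing reduces to aligning the two sequences and measuring the differences. A rough sketch under the assumption that markerless capture has already produced (frames x joints) angle arrays; the data here is random stand-in data:

```python
# Rough sketch: compare two swings captured as joint-angle sequences.
# Assumes markerless capture has already produced (frames x joints) arrays;
# the data below is random stand-in data.
import numpy as np

def resample(sequence, n_frames=100):
    """Linearly resample a (frames, joints) sequence to a fixed length."""
    old_t = np.linspace(0, 1, len(sequence))
    new_t = np.linspace(0, 1, n_frames)
    return np.stack([np.interp(new_t, old_t, sequence[:, j])
                     for j in range(sequence.shape[1])], axis=1)

rng = np.random.default_rng(1)
my_swing = rng.normal(size=(130, 24))         # 130 frames, 24 joint angles
pro_swing = rng.normal(size=(95, 24))         # a different-length reference swing

a, b = resample(my_swing), resample(pro_swing)
per_joint_error = np.abs(a - b).mean(axis=0)  # average angle gap per joint
worst = np.argsort(per_joint_error)[::-1][:3]
print("Joints that differ most from the reference:", worst)
```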

Brian: [00:30:53:27000] Yeah, back to the medical idea. I think also, as hardware and more devices that capture other points of data work their way into our homes, you're going to see more virtual health-related activity happen. And I know this isn't quite related to your technology specifically, but I think your technology will play into it. Ultimately, I think, as we monitor the body more and we're able to show motion capture... Let's say you dislocate your shoulder. You'll be able to get onto a virtual session with a doctor, show them kind of what's going on, and have that captured at the same time. And maybe there'll be some sort of automated insights into what happened. Ultimately, I think doctors will be able to do a lot of pushing rather than having to pull on you for the information. But again, that's down the road a little bit.

Phillip: [00:32:06:66600] It requires, like, this Federated Identity Management platform that's, you know, either government controlled or something worse. I don't know, I can't think of anything worse than government control. Provided we're all still around 10 years from now. My fear is that it's gonna require an immense amount of trust in somebody somewhere to safely store and control that information. And maybe it's us. Maybe that's something where, you know, some peer-to-peer technology like blockchain might be a real transformative technology there. But we're a very, very long way away from that. If I can be the voice of reason.

Bill: [00:32:48:74700] Well, I have a question. I mean, how do you feel about Facebook? Do you feel that your personal information there you have enough control over?

Phillip: [00:32:55:73800] No. And it's a very, very valid...

Brian: [00:32:59:8100] You do use it, though, right?

Phillip: [00:33:00:58500] Oh, yeah. No. It's become the political argument platform for me. Facebook is the... That's the only thing...

Bill: [00:33:08:89100] I hope you're not spreading false news.

Phillip: [00:33:13:39600] {laughter}

Jon: [00:33:13:39600] I think part of the... I think what Bill is hinting at, too, is that it always comes down to weighing things relative to the benefit. We have this security-versus-convenience balance that you strike. And so Facebook gives, I'm assuming, a higher degree of convenience relative to the mistrust you may feel individually about the data that's stored on Facebook or other locations?

Bill: [00:33:42:36900] Yeah. I was making that analogy because to me, you know, your body model such as it is... First of all, I think people get a little bit, you know, kind of breathless about it. But it's just a bunch of math. It's just your geometry. You know, it's not like naked pictures of you. But even then, you know, our feeling is it is owned by the consumer. So we imagine a world where everyone will have their own 3D body model as part of their digital online ID. And you can share it as you wish. If you want to just unlock it and let, for example, Amazon use it to fit you for clothing, then buy the clothing and then you lock it back up again. Great. If you want to share it. I don't necessarily mean show it, but I just mean, you know, make other people aware of its dimensions or its shape or whatever it is in order to have groups of people who share your same geometry and be able to recommend goods and services, then you can do that. If you want to actually throw it out there and have it dancing across the digital world. Great. But that's really kind of you as an individual get to decide that. That's our view of it anyway. Certainly where that is stored needs to be a trustworthy site. And that's definitely part of our vision. We would like to be the repository of your online digital body model, but the consumer is the one who should control that and give permissions accordingly.
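
The "unlock it for a retailer, then lock it back up" model Bill describes is essentially per-party, per-scope consent attached to the body model. A hypothetical sketch of what such a permission record could look like; none of these names reflect an actual Body Labs API:

```python
# Hypothetical sketch of consumer-controlled access to a body model.
# Not an actual Body Labs API; just an illustration of grant/revoke semantics.
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class BodyModelVault:
    owner: str
    grants: dict = field(default_factory=dict)        # party -> set of allowed scopes

    def grant(self, party: str, scope: str):
        self.grants.setdefault(party, set()).add(scope)

    def revoke(self, party: str, scope: Optional[str] = None):
        if scope is None:
            self.grants.pop(party, None)               # lock it back up entirely
        else:
            self.grants.get(party, set()).discard(scope)

    def allows(self, party: str, scope: str) -> bool:
        return scope in self.grants.get(party, set())

vault = BodyModelVault(owner="phillip")
vault.grant("retailer.example", "fit_recommendation")          # unlock for a purchase
print(vault.allows("retailer.example", "fit_recommendation"))  # True
vault.revoke("retailer.example")                                # lock it back up
print(vault.allows("retailer.example", "fit_recommendation"))  # False
```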

Jon: [00:35:11:84600] But also, to add to that, sometimes it's about how you present 3D shape and how you present someone's avatar. So a good example, actually, is Mosh, to a degree, where there's something happening that aids in this perception of augmented reality. And as we progress to a point where this type of technology will be running in real time on a device, it's a similar approach. So even things like virtual try-on, or other things of that nature, might be in a position where there is this data, and you know that there's this data that's available to you, but sometimes the way that you present it can make people more comfortable with the actual data that we're empowering you to use. And so if we're presenting a model, sometimes you may not always be comfortable with that. But if you submit a photo and then you get a benefit back, or if you see the clothing rendered on a photo of you as opposed to a model of you, it makes it that much more real-world. And then the benefits become much more obvious, as opposed to any inferred security or privacy implications related to your 3D shape.

Brian: [00:36:27:32400] Yeah. And Phillip and I talked about this before. Two comments on this. One, I think you guys are absolutely right. As soon as consumers realize the utility of this, that will outweigh privacy ultimately, as long as there are secure ways of handling it. The second thing is, the technology that you guys have is sort of mind blowing in that if there is any public photo of you available online, this data could actually be flagged and a 3D model could be created from that photo. So that is to say, in some ways, this is very private data that's also extremely public. And I think that's going to become, you know, sort of a question in a lot of people's minds, like how they go about representing themselves online. And frankly, at least for this generation of people, almost everyone has some photo of themselves available for public viewing online. In fact, if you have a headshot on Wikipedia right now, the Blippar app, actually... If you scan someone's face, like a photo of them, it will pull up their Wikipedia page. It's cataloged every single photo on Wikipedia. So people are going to have to start thinking about photos and how they view their data online even more than we do now. And so I think we've gone through a lot of change in the past few years. We're going to go through some even larger mindset switches here in the next five years.

Bill: [00:38:15:48600] No doubt. And I do think, again, I think the body model itself is to some degree less recognizable than a face and probably less threatening to have out there in the wild.

Brian: [00:38:33:61200] For now. {laughter}

Bill: [00:38:35:8100] Yeah, yeah. But unequivocally, if there's a photo of you on Facebook and you want to run it through our shape extraction, you're done. You've got your body model.

Brian: [00:38:49:87300] Totally.

Bill: [00:38:50:54900] So, yeah, no doubt the facility and the ease at which this can be done will only increase.

Phillip: [00:38:57:12600] There's already a ton of information out there that we have no control over. I think I can own that much. The TSA in particular has probably seen more of me than my wife has. So I get that.

Bill: [00:39:10:24300] No doubt. Those are millimeter wave scanners. They're getting you naked.

Phillip: [00:39:15:4500] Yeah, it's pretty close. It's pretty close. I think the problem for me is the disconnect from reality. I'll be honest with you. I don't know how much I want to really know exactly what my measurements are. I'm going to lie to myself. I still have jeans... I've become my dad. I now have jeans from, like, 14 years ago that I swear I fit into, but I probably don't.

Bill: [00:39:40:30600] Well actually, we can solve your problem. We don't have to tell you what measurements you are. We just have to tell you what fits.

Phillip: [00:39:47:24300] Yeah. Actually, I love that.

Bill: [00:39:49:34200] Yeah. And actually, in fact, just going back to a couple of comments ago, one of the big issues with the apparel industry... A 32 waist? Come on, that describes one principal component of your body. We track 256 principal components. Your waist is not your geometry. And you may care what your waist is because of your appearance. But when you're shopping for clothes, whether you're a 32, 36, or 34 is almost irrelevant, because every brand has a different sizing schema, a different fit model. They've got a different block they use to manufacture the clothes.

Phillip: [00:40:30:55800] Yeah.

Bill: [00:40:32:900] I think the thing that everyone cares about most is: does this clothing fit me? In our world, you never know your measurements. Measurements are irrelevant. The only thing that matters is, you know, here's my geometry and here's the stuff that fits. So we're solving your problem.
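
Bill's point that "measurements are irrelevant, the only thing that matters is what fits" amounts to scoring garments against the shopper's geometry rather than against a size label. A toy sketch with invented fit-model numbers, not any brand's real specs:

```python
# Toy sketch: rank garments by how closely their fit-model measurements match
# a shopper's body, ignoring the size printed on the label. Numbers invented.
import numpy as np

# Each garment's fit model, expressed as target body measurements (chest, waist, hip in cm)
garments = {
    "Brand A / 32": np.array([96.0, 81.0, 99.0]),
    "Brand B / 32": np.array([92.0, 78.0, 96.0]),   # same label, different fit block
    "Brand C / M":  np.array([97.5, 83.0, 100.5]),
}

shopper = np.array([96.5, 82.0, 100.0])             # the shopper's actual geometry

fit_scores = {name: float(np.linalg.norm(spec - shopper))
              for name, spec in garments.items()}
for name, score in sorted(fit_scores.items(), key=lambda kv: kv[1]):
    print(f"{name}: distance {score:.1f} cm")       # smaller = closer fit
# The shopper never needs to see a number, just the garments that fit best.
```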

Phillip: [00:40:45:28800] Yeah. I love that. Please solve my problems. This also sort of makes me think I'm sort of always in the entrepreneurial spirit. But somebody out there, this is a free business idea. Cut me a royalty check, but somebody should just create a clothing brand called Lie to Me, which just sizes everything down five or six sizes.

Jon: [00:41:03:76500] With the accurate sizing?

Bill: [00:41:04:78300] Yeah, because they've done that. It's called double zero.

Brian: [00:41:12:8100] So another thing kind of in line with what we've talked about before that I'm interested in talking about a little bit: we talked about how you'd be able to share your body data, but what about body data as an asset? That is to say, you know, as we've got this data that we own and can share when we want, there's also going to be the opportunity to provide that data to companies for their use. And so if you have the right body data, companies may be interested in using that data, especially if we get into more full-body scanning and capturing real image complexion. And honestly, throughout your lifetime, there are going to be opportunities for companies to purchase... Let's just talk about modeling for a minute: models' data, and then use that, even posthumously, for advertising their clothing or whatever it is across someone's lifespan. And I'm thinking that there's going to have to be some sort of, and there may already be, laws around how much you can sell, how long those rights last, whether that be a snapshot in time or over a period of time, and in which ways you're allowed to use that data to represent that person...

Phillip: [00:43:01:58500] But doesn't that already exist to some degree? I know there are certain sort of trademarked, copyrighted people. Like, Marilyn Monroe is a really good example. You know, her likeness and image can be repurposed or licensed for a Tiffany or Christian Dior ad if they want. But to use even her silhouette, it has to be... You know, you have to go through the right channels and ownership to do that. So how does that differ from today?

Jon: [00:43:34:16200] Just to back up on packaging up that data. So, I mean, the nature of our technology, obviously, is continuing to collect data, digitized synthetic information about individuals, and then using that to help improve our model moving forward and things like that. But as you mentioned earlier, related to packaging that data, one good example is the plus size industry. And one challenge that you run into as you start going up in size is that the larger you may identify as, the more the weight can be distributed differently throughout your body. And that introduces all different types of variability related to clothing, sizing, design, et cetera. And so if you can understand that market accurately and provide better insight into that market to improve design, to improve sizing, all those things... That's one area where we've identified that people might be interested in purchasing the data itself to help infer other things down the road. But yeah, related to the privacy thing and repackaging information in the future, it's definitely something that we should consider and evaluate moving forward. But I think where it's more interesting to us, at least today, is the data that we already have: how can it aid and improve businesses moving forward, to be able to use that to improve their existing technology, clothing, etc.?

Bill: [00:45:06:36000] And an important thing to underscore here is that we use that data in an anonymous, aggregate way. So your individual shape is not personally identifiable in any way, shape, or form. But we can tell a sports apparel company, hey, you know, here are three hundred swimmers, and this is the breadth of geometry of the swimmers who may be your consumers. So it's not your Social Security number. It's not your face. There's no personally identifiable information. It's in the aggregate, basically just zeros and ones. It's a bunch of, you know, geometric proportions.

Brian: [00:45:50:85500] Yeah, I totally understand that. I think the reason why I brought it up is just because... Washington, where I'm located... One of our listeners sent me an article which highlighted that Washington State is starting to create, or at least talk about, laws that will limit how you collect that information. And so I think it's really relevant right now. And lawmakers are actually starting to pay attention to it a little bit more.

Bill: [00:46:25:23400] Yeah, yeah. I mean, as well they should. And, you know, again, we try to make that point, and we spend a lot of time with our partners going over the security and privacy. I get it. And I'm a big advocate of privacy. I think it's fundamental. I would only say, again, when it comes down to your 3D geometry, it's not really, shall we say, as personal as it might seem. It is not, I think, dangerous. It doesn't have the same kind of implications as your Social Security number, your bank account information, your credit card information, even your birthday, you know. So it's really just a circle or a square. It's just, you know, a different geometry.

Brian: [00:47:16:35100] Yeah, totally. And I think what I'm really excited about with what you guys are doing is that, as you mention, you're creating a secure way for people to be able to manage that data and own it and share it as they will, which I think is going to be absolutely necessary as we're moving forward. There will be companies like yours that allow people to manage how they use their body as a platform.

Bill: [00:47:53:85500] Indeed, indeed. And we're really at the beginning stages of this. So at this time last year, we really were at a point where if you had a 3D sensor on your phone, we could give you a body model. And that was pretty serious friction to adoption. Now, with a photo or a video, we can give you a body model. The need for building up the secure environment in which you can house your body model will grow. Because now people can do it much more easily. And that's definitely very much in our sights.

Phillip: [00:48:29:8100] Can I ask sort of a different question? This is kind of an oddball question, but you got my gears turning. So you sort of evolved the product. It went from, you know, sensor based to sort of just image recognition or extracting data out of images, even still images. What could the next step be? I mean, how much could you potentially learn from the human voice or breathing patterns or...?

Bill: [00:48:59:10800] We're training that model right now via this podcast.

Phillip: [00:49:03:77400] {laughter}

Brian: [00:49:03:77400] {laughter}

Bill: [00:49:03:77400] Just for the record, one of my previous companies was a speech recognition company. So I know a lot more about that than, believe me, you want to know.

Phillip: [00:49:12:88200] Yeah.

Bill: [00:49:14:10800] So we're body folks. Voice is a whole other set of things. And maybe for us, the bigger question is, you know, can we really encompass the full breadth of information and knowledge and data that you want about 3D human body shape? Can we encompass all ethnicities? Can we encompass everything from little tiny babies and infants to geriatrics? Can we encompass the longitudinal data that goes into aging, to pregnancy, to growing, to other things like that? So that's really where we focus. And we have by far the largest amount of training data underlying our models compared to any other company or institution in the world. But there's still tons more for us to collect and to correlate.

Jon: [00:50:23:67500] Yeah. To add just slightly to that. You know, one thing we mentioned briefly is that clothing is still a relative bottleneck for predicting shape and pose, as you can imagine. We talked about how clothing itself can obscure your body shape. One thing that we're exploring is the ability to identify the pixels in an image that are clothing and build a model that can compensate for the information that we're detecting as obvious clothing artifacts within the photo, understand where there's a body that's exposed, and then take that information, with existing shape prediction, pose prediction, etc., and fuse all of that together to make the shape, the pose, the motion, all that fun stuff, more accurate moving forward.
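
At a very schematic level, the clothing-compensation step Jon describes starts with a per-pixel label of what is clothing and what is exposed body, then down-weights or removes the clothed pixels before the shape estimate is fused. A toy illustration of the masking part only; the segmentation model itself is assumed, and this is not Body Labs' actual pipeline:

```python
# Toy illustration of masking clothing pixels before shape fitting.
# The segmentation itself is assumed to come from a trained model; here it is
# just a random stand-in mask. Not Body Labs' actual pipeline.
import numpy as np

rng = np.random.default_rng(2)
image = rng.random((256, 192, 3))                   # a stand-in photo (H, W, RGB)
clothing_mask = rng.random((256, 192)) > 0.6        # True where a segmenter says "clothing"

# Keep only pixels believed to show the body directly; clothed regions are
# zeroed out so the downstream shape fit relies on them less.
body_only = image * (~clothing_mask)[..., None]

coverage = clothing_mask.mean()
print(f"{coverage:.0%} of pixels flagged as clothing")
# A shape predictor could weight its loss by (~clothing_mask) so occluded
# regions are inferred from the body model prior rather than from the pixels.
```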

Phillip: [00:51:15:31500] That's awesome.

Brian: [00:51:17:2700] So one more question.

Phillip: [00:51:19:68400] We're out of time, Brian. We're out of time. One more question. Quick.

Brian: [00:51:22:43200] Well, I know.

Phillip: [00:51:23:43200] We need to have you guys back at some point. That would be awesome, but carry on, Brian. One last one.

Brian: [00:51:28:82800] Yeah. So I know that that Mosh was, you know, sort of your most recent release. What is next? What's the next step that you can talk about?

Jon: [00:51:41:2700] Yeah. I mean, so I think the more interesting thing is what capabilities are we going to unlock, and how are we going to make Mosh better, but also our underlying core technology better, to empower all these other verticals, categories, et cetera. You know, one thing I think is pretty apparent, and you may have noticed it when you're using Mosh, is that for all of its benefits with detecting pose, et cetera, there is inherent value when you make that real time. And when you make that real time, there are a lot of other separate requirements that may need to happen in order to do that cost effectively, but also more efficiently and effectively for the user. So for Mosh today, the user uploads a standalone image, we process it really quickly on a server, just the pose of the individual inside that detected photo, deliver it back to the user, and then all the fun 3D graphics are rendered via the device, client side. But moving forward, with real time, all of that magic will have to happen, more than likely, on the device. So getting our convolutional neural networks, CNNs, deep learning, whatever you want to call it, and all of the other bells and whistles that, from a core tech standpoint, are operating on a server, efficient enough to run using something like an iPhone GPU, et cetera, is something we're really heavily exploring. And it's something we're looking to possibly deliver in the very near term. And so those are really exciting things, because it'll make Mosh, for this concrete example, significantly better, more engaging, with better rendered 3D content, all that fun stuff. But it'll also support all these other separate initiatives with businesses looking to build off of that technology moving forward.
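
Moving a server-side network onto a phone typically starts with exporting the trained model into a portable, optimized format that an on-device runtime (Core ML, ONNX Runtime, and the like) can execute on the phone's GPU or neural hardware. A generic sketch of that export step, using a small stand-in network rather than Body Labs' actual stack:

```python
# Generic sketch of the "get the CNN off the server and onto the device" step:
# trace the trained network and export it to ONNX, a portable format that
# on-device runtimes can consume. This is not Body Labs' actual pipeline.
import torch
import torch.nn as nn

model = nn.Sequential(                      # stand-in for the trained pose/shape network
    nn.Conv2d(3, 16, kernel_size=3, stride=2, padding=1),
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(16, 72),                      # e.g., per-joint pose parameters
)
model.eval()

example_input = torch.randn(1, 3, 224, 224)
torch.onnx.export(
    model, example_input, "pose_net.onnx",
    input_names=["image"], output_names=["pose"],
    opset_version=13,
)
# The .onnx file can then be converted (e.g., to Core ML) and run client-side,
# which is what makes real-time, on-device pose estimation feasible.
```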

Brian: [00:53:37:900] That's exciting.

Phillip: [00:53:37:900] That's a great place to end. Well, congratulations on all of your success. We're so glad to have had you on the show. Where can people find your products and where can people find you guys online?

Jon: [00:53:51:900] Yes. So it's bodylabs.com. Common spelling for that. That is our Web site. Mosh.ai is the Web site for the Mosh app. And if you just search Mosh camera in the App Store, it should be the first thing that comes right up. Download it. Give us your feedback. All that fun stuff. We're always looking to make Mosh a little bit better, not only just incrementally, but with a lot of the fun stuff that I mentioned coming down the pike.

Phillip: [00:54:18:65700] And are either of you on Twitter or anywhere where we can point people?

Jon: [00:54:22:86400] I am on Twitter and so feel free to follow me. It's just my name. It's @joncilley. And essentially my handle is that. Luckily I have a unique enough last name where my handle is that pretty much anywhere.

Bill: [00:54:37:81900] So I'm a Luddite. So no.

Phillip: [00:54:40:43200] {laughter}

Brian: [00:54:40:43200] {laughter}

Phillip: [00:54:43:81000] Great place to end it. Brian, why don't you wrap us up.

Brian: [00:54:45:60300] All right. Thanks so much for listening to Future Commerce. If you listen on iTunes, be sure to pop a five star review on there. Leave feedback if you can. And always looking for your comments, feedback, anything you have to say about the show. So you can leave it in the Disqus comments on our site or hit us up on Twitter or LinkedIn or wherever you can find us. And always happy to engage. If you want to get real fun, you can always listen to the show on Alexa, via the phrase "Alexa Future Commerce podcast." With that thank you so much for listening. Keep looking towards the future. Thanks, guys, for coming on the show.

Bill: [00:55:31:40500] Thanks very much.

Jon: [00:55:32:9900] Thank you. Bye guys.

Phillip: [00:55:32:62100] Bye

Recent episodes

LATEST PODCASTS
By clicking “Accept All Cookies”, you agree to the storing of cookies on your device to enhance site navigation, analyze site usage, and assist in our marketing efforts. View our Privacy Policy for more information.