Cathy Hackl is a futurist, Apple Vision Pro developer, and co-author of the upcoming book, Spatial Computing: An AI-Driven Business Revolution.

Motley Fool host Deidre Woollard caught up with Hackl for a conversation about:

  • How businesses like Lockheed Martin and Lowe’s are already using spatial computing.
  • The current challenges of developing apps for the Vision Pro.
  • Virtual air rights, digital fashion, and questions about the future of spatial computing.

To catch full episodes of all The Motley Fool’s free podcasts, check out our podcast center. To get started investing, check out our quick-start guide to investing in stocks. A full transcript follows the video.

This video was recorded on Feb. 24, 2024.

Cathy Hackl: Once you start to have these devices that are scanning the physical world in real time, you start to create what I call large vision models. Models of the physical world that understand our world and are updated almost in real time. That’s where it starts to get really interesting and a little scary, from a privacy perspective, I’ll be honest.

Mary Long: I’m Mary Long and that’s Cathy Hackl, a futurist, author, and an Apple Vision Pro developer. Deidre Woollard caught up with Hackl for a conversation about spatial computing. They discuss how this technology could change meetings, manufacturing, and remodeling a kitchen, what the beginning of mobile can teach us about this computing revolution, and the difference between the technology and value in the Apple Vision Pro.

Deidre Woollard: Let’s talk about spatial computing. This is all about the Vision Pro. Apple wants us to believe this is the future of how we interact with technology. What’s your take?

Cathy Hackl: I want to start by saying that I think we need to take a step back. We’re going to talk about the hype, of course, but not frame that conversation about spatial computing around just one single device, and this is why. When you start to think about spatial computing, a lot of people are thinking of it in the context of only the Apple Vision Pro, or some people might use it interchangeably with the term mixed reality. I am of a different mindset. I don’t think that this is the same. Just like the internet is not the same thing as our mobile phone, I don’t think spatial computing is the same as one single device or one single technology.

What I mean by that is that spatial computing is an evolving form of computing that is 3D-centric and that uses technologies like AI, computer vision, and mixed reality to blend virtual experiences into a human’s experience of the physical world. That being said, through spatial computing we’re also allowing technology, computers, robots, devices, and hardware to start to navigate the physical world with us. It’s a combination of a lot of new technologies, and it’s about human-to-human communication but also human-to-computer interaction. It is not about one single thing.

I would even go as far as saying this is a new field of technology, and it is as revolutionary as mobile computing has been for all of us. With spatial computing, I think people need to take a step back and realize that what we’re talking about here is the future of computing, the future of how we as humans will interact with technology in new ways, and not just one device, one single technology, or even the metaverse, and I know we’re going to get into that.

Deidre Woollard: I have to ask you, have you played with or used the Vision Pro and how does that relate to this overarching theory of spatial computing?

Cathy Hackl: I definitely have used the device. I am personally an Apple Vision Pro developer, so I actually had a chance to try the device before many people did. I just couldn’t speak about it publicly. Obviously, lots of NDAs and stuff, but now everything’s out in the open. I had done things with it before, and obviously, when it came out to the public, I went and bought my personal unit and everything. I think the device is magical. A lot of people, when they first try the device, have this “aha” moment of awe at seeing technology in a new way. I’ve noticed that in most of the people doing reviews or demos.

I definitely have seen that throughout my career in technology when people try these new technologies. That being said, I’ve said this before: I think there is $3,500 worth of technology in the device. As of yet, I don’t think there’s $3,500 worth of value to the regular consumer. There is value there for developers like myself. Maybe for enterprises, not yet, but that’s because it’s really early. It is a fantastic product. I think it is doing a lot of great things. It is extremely powerful, and another thing I think a lot of people don’t realize is how much artificial intelligence is being used in this device. It’s a version 1. It is the right product at the right time, to be honest, but it’s not a mass-market product yet.

Deidre Woollard: I love that you made that distinction between the value of the technology and the value to consumers. I’m curious, though, since you’re a developer on this, how does developing for it compare with other experiences you’ve had before?

Cathy Hackl: This one is a little bit different because, obviously, it’s really new hardware that very few people had access to. Even for the development process, you have to have a computer with at least an M2 chip, so an M2 or M3. I think that in itself prices some developers out of developing for this. If you want to develop using Unity, you have to have a Unity license. There are certain things right now that might make it hard for, say, a young kid in their dorm to develop for this. They could still develop using some of the native tools, let’s say ARKit or other parts of the native Apple ecosystem. But if you want to develop something really robust, you’re going to need certain capabilities. So I would say that.
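For readers curious what the native path she mentions looks like in practice, here is a minimal sketch of a visionOS app using Apple’s SwiftUI and RealityKit frameworks, the kind of starting point that doesn’t require a Unity license. The app name, scene identifier, and the object being placed are hypothetical illustrations rather than code from Hackl or her projects; the sketch only shows the basic shape of opening an immersive space and anchoring a piece of virtual content in the wearer’s surroundings.

import SwiftUI
import RealityKit

// Hypothetical minimal visionOS app: a window with a button that opens an
// immersive space and places a single virtual object into the wearer's room.
@main
struct HelloSpatialApp: App {
    var body: some Scene {
        WindowGroup {
            ContentView()
        }

        ImmersiveSpace(id: "demoSpace") {
            RealityView { content in
                // A small sphere roughly 1 meter in front of the user at eye height.
                let sphere = ModelEntity(
                    mesh: .generateSphere(radius: 0.1),
                    materials: [SimpleMaterial(color: .blue, isMetallic: false)]
                )
                sphere.position = [0, 1.5, -1]
                content.add(sphere)
            }
        }
    }
}

struct ContentView: View {
    @Environment(\.openImmersiveSpace) private var openImmersiveSpace

    var body: some View {
        // Opening an immersive space is asynchronous, so it runs inside a Task.
        Button("Enter the immersive space") {
            Task { _ = await openImmersiveSpace(id: "demoSpace") }
        }
        .padding()
    }
}

Even building and running a sketch like this requires Xcode on an Apple silicon Mac, which is part of the cost of entry Hackl describes; anything more robust, such as the Unity route, layers additional tooling and licensing on top.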

I will say, though, because the device is so amazing and truly blends the physical world and virtual experience together, the possibilities of what we can develop are truly mind-blowing. As a developer, it is both an exciting challenge and an opportunity to start to create experiences that have never been created, that no one’s really thought of, and that take full advantage of the value that is in the device, in the technology within the device. I think once we have more developers creating amazing content, we’ll get to the point where there is going to be more value for the consumer, for the mass market. We’re just not there yet with that part, but that’s where the work of someone like myself, at Spatial Dynamics, comes into play. We need to create this content, push the limits of the hardware and the technology, and truly create mind-bending experiences.

Deidre Woollard: Fantastic. Well, I’m curious about that because I’m thinking about when the iPhone came out and there was not a lot of development. You had a few cool things you could do, but it was really two years later that the ecosystem developed. Do you see a similar timeline happening here?

Cathy Hackl: A hundred percent. This is version 1, and I would say this is like the beginning of mobile. It’s really early. First of all, there might be a lot of apps available for the Vision Pro. That doesn’t mean that discovering them is easy within the Vision Pro ecosystem when you’re wearing the device, and it doesn’t mean that all these apps are using the full capability of what the device can do in spatial computing. I want to be very clear about that. It is early. We’re all testing and learning, even with the companies that I’m already working with, the early adopters. It’s about exploring the potential of what this can become. It is about the early learnings you can start to take away when you’re creating these new apps and new experiences.

Because if you look at spatial computing from the perspective that it is the evolution of computing, that it is what comes after mobile computing, and you start to think about how it’s not just about the device but about every surface of the physical world around you becoming a spatial interface, then it starts to get interesting. Another thing I’ve noticed with the companies I’m working with is that they’re also thinking long term, and I tell them this. When you start to think about an iPhone 16, an iPhone 17, a Vision Pro 2, a Vision Pro 3, and other hardware that the ecosystem is going to throw at us, it starts to get more interesting. Especially when you start to think about iPhones with more spatial capabilities, more spatial video, maybe some other things you might be able to do with the phone that you couldn’t do before, that’s where I think the companies and the mass market and more brands are going to say, OK, now maybe it’s something we should pay attention to.

That’s when it stops being only about 400,000 headsets being sold and starts being about millions or billions of people with these devices at hand that have these spatial capabilities. A lot of us are playing the long game here. That long game is moving really fast, though, so I do have to say that. When people ask me, do I put it at five to 10 years? I can’t tell you. I don’t think anyone really can tell you. I would have to have visibility into every one of these companies’ road maps to truly tell you whether it’s five years from now or 10 years from now. I think we’ll have to see, and there are a lot of different parts that have to fall into place. This is the other part, which I think is interesting and people forget: spatial computing has four components. It has the hardware, which obviously we as tech business people love overindexing on, taking it apart, talking about all this stuff.

There’s the software, which is where a lot of work needs to be done to create this content and create value and experiences. There’s the connectivity. When you start to think about all these devices, if everyone’s going to end up wearing glasses, with all these devices connecting at the edge, think about the types of connectivity we’re going to need: another level beyond even 5G. We’re going to need 6G and whatever comes next. Then there’s also the data, all the information these devices need to operate, because they’re literally scanning the physical world almost in real time, and then all the data they are going to produce. That obviously has a lot of implications. Long answer, but that’s what I think right now.

Deidre Woollard: [laughs] Well, as someone who invests in data center REITs, I’m like, “OK, I see something happening here that I’m interested in.” I also want to talk about how so many people focus on the consumer aspect of spatial computing, like games and videos and things like that, but there are a lot of business applications here, too. I feel like people are ignoring that part a little bit. With spatial computing, thinking about it from the real estate aspect, you have tremendous capability for business: digital twins, building information modeling. Talk to us a little bit about some of the business applications.

Cathy Hackl: Yeah. I think that, obviously, a lot of people want to focus on consumer because that’s a little bit more exciting to talk about, shopping and all that. But from the business perspective, there are already use cases with astonishing results. I’ll give you an example: Lockheed Martin used the Microsoft HoloLens, which is in essence a spatial computing device, and achieved a 93% reduction in costs on one part of the manufacturing process for the Orion space vehicle. That’s a 93% reduction in cost because they used spatial computing. So there are already a few use cases out there that show you that this technology can be extremely powerful. With spatial computing plus AI, you’re starting to talk about revolutionizing a lot of different processes, whether it is manufacturing or real estate.

With this technology, you can actually put the device on and show someone in the physical world what that building is going to look like, where the exits are going to be. You’re going to be able to really walk through a virtual building, not in virtual reality but in the physical world, seeing everything around you. That to me is extremely powerful. It’s also going to allow you to tour a place you might not be able to go to. You can do that in virtual reality, and there’s already a Zillow app in the Vision Pro.

But right now it’s more like 360 photos that you’re walking around, so it’s not that sense of presence just yet. When we start to think about that, I think about real estate, from decorating your house, and Lowe’s is already doing that in its Vision Pro app with a kitchen demo where you can decorate your kitchen. Actually seeing what something could look like in the physical world I think is really exciting, or understanding from a construction perspective what the physics of the space could do to the design, the physical design of that house.

I think there’s a lot that’s about to be impacted, not just because of the devices but because the physical world becomes a spatial interface; the physical world becomes where we see technology. We no longer have to see it in these little rectangles that we carry around with us or on our computers; it starts to be all around us. So I think that opens up opportunities for real estate, for finance, for education, manufacturing, of course HR, you name it. Just like mobile computing has impacted most industries, I believe spatial computing, as the evolution of computing, of future compute, will impact almost every industry.

Deidre Woollard: Well, I want to talk a little bit about this co-presence idea because everybody wants a better meeting experience. When Meta first debuted some of the meeting experiences with the guys with the legs, everybody loved to make fun of that. But it seems like maybe we’re getting closer to that now. I was looking at what [Alphabet‘s] Google is doing with Project Starline, and it feels like we’re getting closer to actually making these remote meetings feel a little more like we’re really together.

Cathy Hackl: I will tell you, I had a demo of Project Starline last year at the Code Conference, and I was blown away. You’re sitting in front of someone, and you’re not wearing a device. It’s done through AI and cameras. You’re seeing someone through a screen, but the screen is showing you a version of them in 3D, and I was so impressed by the fidelity. I could see someone’s pores more than I would normally see them in person. I was like, “Wow, this is crazy.” I remember the person I was doing the demo with had an apple, and I literally just wanted to grab the apple from them because they moved it forward like they were giving it to me. Of course, I couldn’t grab it, but those sorts of things. Something that really stuck with me in the presence part was the pores, but also I was wearing this ring, and he could see the detail on the ring in a way that was even better than eyesight. I don’t know how to explain it.

Deidre Woollard: Wow.

Cathy Hackl: The detail was so clear that he said, OK, this could be really powerful from a presence perspective. Right now, for example, with the Vision Pro, you can use your persona, which is like a 3D representation of you. It’s starting to get a little better. The first couple of weeks it was very uncanny. It’s still very uncanny, but it’s starting to get a little bit better, and I think they’re starting to improve that. I think we’ll start to get a little bit more high fidelity, let’s say, in how we look in these devices. But the idea of presence, I think that’s where it starts to get interesting. That’s where it also starts to become something of value and of interest to the mass market, because one of the most powerful things I’ve been able to do on my device is look at the video I took during the holidays with my parents. My dad is turning 80 this year. He’s getting older. He’s very healthy.

But I was like, I’m just going to shoot as much as I can on my iPhone so I can have these spatial videos of him reading Llama Llama Red Pajama to my kids, stuff like that that’s special. When I put the device on and I can see these experiences in 3D, in spatial video, it’s powerful. It’s powerful, and I’m like, is this the future of the family photo? Is this the future of how we might retain some of those memories? That idea of presence in real time, but also presence after the fact, I think could be very powerful and could become one of the reasons that someone like my dad might eventually, in a couple of years, get one of these devices. Just like back in the early mobile days, my dad wasn’t on Facebook, [laughs] but he got on Facebook because that’s where he could see pictures of his grandbabies. I think it’s almost similar right now. That idea of presence, I think, is going to be huge for human communications, both on the personal side and the business side.

Deidre Woollard: Well, you mentioned AI earlier, and I’m curious, as spatial computing evolves, how much of it you think is going to be about co-presence, looking at memories, and person-to-person communication versus person-to-AI communication and collaboration. Does this get us deeper into that? Because I feel like we went through the last year of generative AI, and Copilot, things like that. People are getting more integrated with AI. It seems like spatial computing takes that to a whole other level.

Cathy Hackl: A hundred percent, in the sense that anyone who’s working with ChatGPT, or Copilot, or Claude, or whatever it is you’re using from that perspective, we’re all engaging with it on our phones or our computers. That’s where you’re engaging with AI. So when you start to look at all these AI companies, they’re all looking for that container, what they call a container, where they’re going to put their AI technology, and that comes in the form of hardware. Whether it is the Rabbit R1 that took over CES, whether it’s the Humane AI Pin that a lot of people are curious about, whether it’s the Vision Pro, or whatever OpenAI and Jony Ive decide to create and bring into the world, it’s all containers for how we engage with AI. I think that’s where this interesting conversation starts to happen: it’s about engaging with technology in new ways.

The other important part of this conversation, which I think a lot of people are not having or understanding, is that right now a lot of the generative AI we’re talking about is large language models trained on language. Even when you look at something like Sora or Runway, it’s trained on video, but it’s video of things that have already happened or photos of things that have already been created. Once you start to have these devices that are scanning the physical world in real time, you start to create what I call large vision models, models of the physical world that understand our world and are updated almost in real time.

That’s where it starts to get really interesting and a little scary, from a privacy perspective, I’ll be honest. But I see the evolution of what comes after large language models as large vision models, models that truly start to become world builders, that truly have an understanding of the physical world. We’re not there yet. I don’t want people to get too excited; it’s the beginning of it. But these devices are going to start to create these models of the world. They’re going to start to understand our world so that we can engage with AI in totally new ways, in ways that are going to feel seamless. There is, I think, this evolution. I think a lot of people are not understanding what these devices can do when they’re reading the physical world around us.

Deidre Woollard: I think that’s really fascinating because it makes me think of the difference that happened when we had mobile devices and all of a sudden you could see where everybody was, but you were just seeing a point, a dot. Now you’re seeing all of this information, and the computers are processing and layering all of it. There seem to be so many applications for that.

Cathy Hackl: We’re accessing that data layer. We can’t see the data layer as humans, but there’s all this data around us. Once you have these devices that we can use to see through, glasses hopefully, very nice glasses in the future, and they’re reading the data we can’t really see, you start to access this new data layer. That’s what I mean when I talk about virtual experiences in someone’s experience of the physical world. That’s where it starts to get interesting. You start to see all this data, not on your phone or a little pin, but actually in front of you in the form of an annotation, a hologram, or whatever it is you’re going to be accessing. Yeah, it starts to get really interesting really fast, and that’s where compute power, processing power, connectivity at levels we’ve never seen, edge, and cloud come in. That’s where it starts to get interesting when talking about spatial computing from the perspective of AI chips or cloud computing, all these things that need to be in place for us to truly access that data layer in the physical world.

Deidre Woollard: Interesting. It is. I’m just going so many places. You mentioned that you have children, I’m assuming youngish children, and I feel like for young people, their experience of the physical self and the virtual self is much different than it was for me growing up, because they’re growing up playing in Fortnite and Roblox, and they’re dressing their avatars, they’re thinking about that. As we look to a spatial computing future, it seems like there’s a consciousness that exists in two places. Do you see that kind of thing happening?

Cathy Hackl: A hundred percent. My three kids are all Generation Alpha, and I’ve been doing a lot of research and work on Generation Alpha, the kids of Millennials, born starting in 2010 and still being born. So Gen Alpha is in the process of growing up, let’s say. I always say this: to them, what happens in the virtual space is equally real. My kids don’t make a differentiation between the real world and the virtual world. It’s just different parts of the world. It’s a spectrum to them. I always say, if they fight with a friend in Fortnite and they see them at school the next day, they’re still going to be mad. [laughs] It doesn’t change because it happened in Fortnite.

It’s still an interpersonal relationship. So these spaces are social spaces. They fluctuate between them; it doesn’t make a difference for them. They just flow between them. It’s very natural for them. Yeah, that is a difference, I think, from other mindsets, Millennials and up probably. It’s harder for us to wrap our heads around, but I see it with them day in and day out. That’s when you start to think about spatial computing and gaming or even fashion. It unlocks a new world of self-expression; it unlocks new opportunities for them to socialize in new ways. For this younger generation, Gen Z and mostly Gen Alpha, because Gen Alpha is really who’s grown up with this being ubiquitous, it’s truly changing the way they interact.

Deidre Woollard: Well, I think the fashion question is interesting because I know you’ve invested in some virtual fashion. Now it’s like Fortnite skins and things like that. But it seems like it could go in whole new places and really be valuable to people.

Cathy Hackl: Yeah, I think it’s about to unlock a new era for fashion, luxury, and retail, in the sense that you start to have a way to see fashion in new ways. I’m going to be able to walk around, and if you’re wearing your glasses, you’re going to be able to see the outfit I want you to see on me, or I might be able to change my outfit depending on whether I’m going into this store right now or this office for this meeting. Maybe I’m meeting my friends for coffee, but then I’m going to have a job interview. Virtually, I might be able to change the way I look.

I think there are going to be some interesting things there. You already have, for example, London Fashion Week’s partnership with a company called SYKY, which I’m a big fan of. I’m a holder of their NFT, and we can talk about NFTs for sure, but I’m a member of their community. I think they’re starting to unlock fashion in a mixed-reality sense, where there’s a phygital, I don’t like that word too much, but there’s a physical expression of fashion and a virtual expression of fashion, and those expressions are starting to collide and get closer together.

Definitely, a lot of the conversations I’m having are with fashion brands, and most of them are at the forefront of pushing these technologies. They’ve experimented with augmented reality and AR try-ons for a long time, and now they’re asking, “What comes next? What does this mean?” If this is already proving to be a way to create sales and engagement with our community, what does it mean when you start to unlock this? Yeah, I think we’re about to see fashion and technology coming closer together than they have ever been.

Deidre Woollard: Let’s talk a little bit about NFTs as we wrap up, because we had the NFT hype, we had the metaverse hype, and now I’m hearing maybe metaverse real estate is suddenly becoming valuable again. You’ve been an integral part of all of these movements, and now you’re looking back on what came before and what is happening now. It feels to me like what happened before sets up what happens now. But what are you seeing?

Cathy Hackl: There was definitely, obviously, a lot of hype over a lot of these NFTs. Let’s be honest, they were speculative assets. I mean, I got into a lot of these as speculative assets. I’m not in most of them anymore, to be honest; I’ve divested myself from that. But I do think it did prime the market and the business world to understand that you can own virtual things, that there is something you could own virtually, and it’s in the code. I think it did change the mindset a little bit in that sense, even if it was overhyped. When it comes to real estate, I don’t know if I would go back and invest in some of the virtual real estate.

I don’t know if I would personally do that again. It looks interesting, but it would have to be a really good value proposition for me to get back in there. We’ll have to see. Where I think it starts to get really interesting from that perspective is virtual air rights, and stay with me here for a second. When everything you can see and everything you can hear becomes real estate, that’s where you start to figure out, OK, well, should people be able to own this?

Should a company be able to own this? What I see and what I can hear, can they own the space around me? Can they own the space on top of my house? Can they own where I live in the virtual space? So I think that’s where you start to get really interesting conversations about who owns the virtual layer that sits on top of the physical world, the layer you’re able to see through spatial computing. A lot of the conversations that were had around virtual real estate, I think, are going to be very valid when you start to think about virtual air rights. I can’t remember which country it is right now.

There’s a country in the Caribbean that already has certain laws related to virtual air rights: you own the virtual air on top of your house up to a certain height. I think we’re going to start to have these conversations. It keeps me up at night. It doesn’t keep most people up, but it keeps me up at night because I really start to think about how this becomes real estate. It’s not just a floating billboard in Times Square that you’re seeing; it starts to become everything around you. As I said, exciting for some things, a little nerve-wracking in other senses. So I think some of the conversations around virtual ownership start to become really relevant, and the role of blockchain, when it comes to what you might be seeing and whether it’s real or authorized, starts to seep into this conversation about the future of spatial computing.

Deidre Woollard: Wow. Well, thank you so much for your time today. This is so fascinating. Where can people keep up with you because you’re going to a million places?

Cathy Hackl: I am. Definitely on LinkedIn, Cathy Hackl, H-A-C-K-L. That’s where I share a lot of this content. If people want to reach out to me, they can also do that by email at [email protected]. I have a new book coming out pretty soon called Spatial Computing: An AI-Driven Business Revolution, and they can find it at spatialcomputingbook.com. I like to share a lot of information and valuable content on where I see this going, so I’m happy to connect with anyone.

Deidre Woollard: Fantastic, thank you so much.

Mary Long: As always, people on the program may have interest in the stocks they talk about and The Motley Fool may have formal recommendations for or against, so don’t buy or sell stocks based solely on what you hear. I’m Mary Long. Thanks for listening. We’ll see you tomorrow.
