Announcer: Today on Building the Open Metaverse…
Morgan McGuire: And how do you make the world not dense? We’re pretty good at that as a field; within any one area we can have, essentially, more polygons than pixels now. But how do I get the scope of doing cities and countries and planets? And that’s everything from floating point precision breaking down, to putting the world on non-planar surfaces, so you actually have a globe. And then dealing with physics at that scale, where the domains don’t have discrete boundaries. Right? So simulating a river is within video game tech today; simulating an ocean is not. We only do a little strip right next to you, and there’s no boundary if you simulate a whole planet. So as the metaverse grows without bounds, it’s all about scaling.
Announcer: Welcome to Building the Open Metaverse, where technology experts discuss how the community is building the open metaverse together. Hosted by Patrick Cozzi from Cesium, and Marc Petit, from Epic Games.
Marc Petit: Welcome to our show, the Building the Open Metaverse podcast, where technologists share their insights on how the community is building the metaverse together. I’m Marc Petit from Epic Games, and my co-host is Patrick Cozzi from Cesium. Patrick?
Patrick Cozzi: Thanks, Marc. Hello, everyone. We have a real treat today; with us is Morgan McGuire, the chief scientist at Roblox, leading research there. He’s also an adjunct professor at the University of Waterloo and McGill University. Morgan, I’ve followed you for many years and all the contributions you’ve made to the field. This is our seventh episode of the podcast, and it’s the first time I brought a prop. I actually brought with me, on the subway, my copy of Computer Graphics: Principles and Practice. And you signed this, along with five of the other authors, at SIGGRAPH 2013. So, very nice book.
I also noticed that I’m on page 910 of it, apparently; that’s where my bookmark is. But before Roblox, Morgan made so many contributions to the field and helped so many people like myself get into the field: the Journal of Computer Graphics Techniques, all your work with the I3D conference, the Graphics Codex, your teaching at Williams College. And you’ve stayed so pragmatic too, with all the work you’ve done with game jams and posting everything publicly. I think I’ve read every tweet that you’ve ever posted. With that said, Morgan, we’d love to kick off and just hear more about your journey to the metaverse.
Morgan McGuire: Bonjour. Hello. Thank you guys. It’s such an honor to be here with both of you. I think it’s a mutual admiration society with Patrick and Marc. And also, all the companies that we’ve each worked at. I’m a huge fan. I think one of the great things about working in, originally, the entertainment industry, and now I think we all see the technology we’re building as bigger than entertainment. Obviously, with Cesium, Patrick’s work has gone there before us. But now with building the metaverse, and really all the ways that 3D technology can enable more than just games is really exciting. I think one of the great things about working in this field is, I work here and, I bet you do as well, because I want to use this stuff. I want to live in a future where there’s cool VR and there’s neat 3D stuff, and everything is beautiful and has awesome interactions. And I can do things that I can’t do in the real world.
And I want to participate in that as a consumer, as a player, as a user. So it’s great that we can be a part of helping to create that. But I’ve never lost being a fan of all the other folks who are doing such amazing things, and just being in awe and inspired every day by their work. Thank you, that was a really nice introduction, and I’m really honored you remember all those things I’ve done and worked on. And, right back at you, Patrick. I love everything you do, I love everything Marc does. And just honored to be here with you both.
Marc Petit: Well, thank you. And you’re right, it’s a fun community. And after many, many years, it’s interesting to see that our collective work can make a big difference in our lives, and in our kids’ lives. And I think we take this responsibility very, very seriously. That’s one of the thrusts of having Patrick and me do this podcast about openness, and making sure we create a metaverse that is open, something that we care a lot about. And I know you do too. A nice segue into talking about: so you lead research at Roblox, right? You’re the guy looking down the road, so we’re going to ask you a lot about your predictions on how we get down the road.
And one of the first topics that I remember we touched on very rapidly back in July, when you did the SIGGRAPH Birds of a Feather session, was the necessity to increase concurrency and get more people to interact on a single server. What is your take on this? Roblox is one of the most advanced companies in that space, and you have a background at NVIDIA as well. What are the bottlenecks, and how do we get to create that level of concurrency, making sure we have lots of people that can come together to share an experience?
Morgan McGuire: Thanks for that question, Marc. It’s definitely the biggest thing on my personal research agenda, and very close to my heart, and always has been: the scaling problem. At Roblox we have three research groups. There’s an applied data science group, which is similar to what other companies with large multiplayer games or social media have for live ops, for monitoring our network for health; not the real-time aspect of that, but the analysis after the fact, really understanding how to optimize our platform. Then there’s our user research group, which is all about design and human-computer interaction. And then we have this new fundamental research group. I coordinate all three of these, but I personally lead the fundamental research group, and that’s more academic style.
So it’s similar to what we see from Adobe Research and NVIDIA Research, places like that which I really admire: closely tied to product, but also very forward-looking and open with the community. We’re looking to publish a lot, to engage with colleagues at other companies and collaborators in academia, and to participate in academic conferences. Because it’s really about not the product today, but the next generation of the product. And that fundamental research group itself has three different areas we work on. And you identified absolutely the first and most important to me, which is that notion of scaling the number of players per server, for instance; the number of people you interact with. The other two areas that we’re working on, which I’d love to come back to later, are natural language, everything about communication and moderation of voice and translation, and everything about user content creation.
So, how do we make it so people can create things, put them out into the metaverse, and then be rewarded for their creations as well? That’s an important part of it, one we’ve all looked at with things like asset marketplaces and app stores. So: scaling, communication, and creativity. And on the scaling side that you specifically asked about, there’s a comment that Patrick actually made this summer, at the same meeting that we were all in at SIGGRAPH. Patrick made a joke, but it is completely true for all of us. He said, “Everybody’s talking about the metaverse today in the popular press. I’ve been building this my whole life, and maybe I didn’t know I was building it my whole life, but I now do.”
And it’s definitely the case for me that the early 80s and 90s science fiction influences, definitely, that was what I wanted to build my whole life. I mean, that was it. That’s what brought me to computer science was exactly the stuff we’re doing now. True believer from day one, and virtual reality in all of its forms, and that kind of bringing the world together through digital communication. But I also completely agree with the, I knew I wanted to build it, but I didn’t know I was building it yet. I felt like I was working on adjacent fields. I felt like I was working on video games, I was working on server scaling. I was working on all these things. And then it was only in the last two or three years that I think we collectively realized, on the technology side, the world is ready for this.
And I think the pandemic accelerated that, but the forces were coming before that. 20 years ago the world wasn’t really there… People didn’t understand why you would want video conferencing even, let alone full 3D interaction and simulation. And I think users have gotten more sophisticated. We are all used to carrying phones, to having instant video conferencing, to being able to manipulate with multi-touch. And a lot of people are now getting experienced in various kinds of 3D games. Everyone thinks of first-person shooters when you say 3D games. But I think what’s been really democratizing is the explosive growth, across the indie community, consoles, and others, of non-first-person-shooter games. The tech point that created the genre was Quake and games like it, which were sort of: how do you bring flight simulator tech to the PC? When at the time all you could do was static environments, and you run around and cast rays at each other.
But I think now enough people have experienced all of the different kinds of interactions, whether it’s things like Minecraft, that you can do in 3D. That now the world is ready for that technology. They’re ready to be connected. They want more connectivity. And they want something better than what voice and text and video conferencing are giving them. So that’s there. And then I think in the industry we realize, oh, we’re kind of there because we’ve gotten the scale. We can now do really large worlds. We can put a lot of players in worlds. It’s pretty normal for shipping AAA games to have 100 players on a server instance now. 10 years ago, that was not possible, the networking infrastructure wasn’t there. And with 5G and things coming as that rolls out, even the mobile infrastructure is getting good enough.
The main bottlenecks are everything about that N-squared aspect, especially if you do voice. Every player needs to know about every player. But when we have custom avatars, when we have custom animations for remote players, when we have a voice stream that isn’t mixed globally, like voice chat has historically been in games, but is actually embedded in 3D, so everybody’s hearing a slightly different mix, you somehow have to solve these N-squared problems. In 3D graphics and simulation, we have all these methods for solving those: we build BSP trees and KD trees, and grids and so on. And now it’s time to bring those to bear on these problems. For a school, for a virtual university, for augmented reality where I’m walking around a real city, for a music concert, it suddenly makes sense to have more than 100 players, right?
If they’re all just doing a competition, right? So for shooters or sports, whatever, 100 makes sense as a limit. But if you have all the players in the audience watching the competition, and they’re in the same environment, suddenly tens of thousands make sense. Or if I’m in a city and the city really has five million people, and I want to lay the virtual twin in the metaverse on top of that, I need a million people in that server instance. They’re not on shards. We need to manage that scale. So the scaling, I think, is first everything around the player-to-player N-squared aspect, of making everyone real and interacting, not NPC crowds. And then secondarily there’s all the stuff, where Patrick is far more expert than I am, in how do you make the world not dense. We’re pretty good at that as a field; within any one area I can have essentially more polygons than pixels now. But how do I get the scope of doing cities and countries and planets? And that’s everything from floating point precision breaking down, to putting the world on non-planar surfaces, so you actually have a globe. And then dealing with physics at that scale, where the domains don’t have discrete boundaries. So, simulating a river is sort of within video game tech today; simulating an ocean is not. We only do a little strip right next to you, and there’s no boundary if you simulate a whole planet. So as the metaverse grows without bounds, it’s all about scaling.
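The spatial data structures Morgan mentions (grids, KD trees, BSP trees) are exactly how engines cut the player-to-player N-squared replication cost down to something tractable. Here is a minimal sketch of the grid-based variant, often called interest management: each player only receives state from players in nearby cells. All names and the cell size are illustrative assumptions, not Roblox’s actual networking API.

```python
# Grid-based interest management: instead of broadcasting every player's state
# to every other player (O(N^2) messages), bucket players into a uniform grid
# and only replicate state between players in the same or adjacent cells.
from collections import defaultdict

CELL_SIZE = 50.0  # world units; tune to the typical interaction radius

def cell_of(pos):
    """Map a 2D position (x, y) to integer grid cell coordinates."""
    return (int(pos[0] // CELL_SIZE), int(pos[1] // CELL_SIZE))

def build_grid(players):
    """players: dict of player_id -> (x, y). Returns cell -> [player_id]."""
    grid = defaultdict(list)
    for pid, pos in players.items():
        grid[cell_of(pos)].append(pid)
    return grid

def interest_set(player_id, players, grid):
    """All other players in this player's cell or the 8 neighboring cells."""
    cx, cy = cell_of(players[player_id])
    nearby = []
    for dx in (-1, 0, 1):
        for dy in (-1, 0, 1):
            nearby.extend(grid.get((cx + dx, cy + dy), []))
    return [p for p in nearby if p != player_id]

players = {1: (10.0, 10.0), 2: (20.0, 30.0), 3: (500.0, 500.0)}
grid = build_grid(players)
print(interest_set(1, players, grid))  # player 3 is too far away to replicate
```

With roughly uniform player density, each player's update cost now depends on local crowding rather than the total server population, which is what makes tens of thousands of spectators thinkable at all.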
Marc Petit: Yeah. It’s fascinating, but how do you think we go after this? Do we just wait and Moore’s Law is going to fix it for us? Are there any domains of research that you think are particularly promising?
Morgan McGuire: So, with a few exceptions Moore’s Law is not solving anything for anybody these days, right? So, when we refer to Moore’s Law, we’re referring to Gordon Moore’s economic observation that at Intel, every two years he could sell twice as many transistors. So that there would be demand for computation scaling at that rate. And it turned out that if there’s demand for something, the tech industry is pretty good at meeting that demand. And so Intel and then many, many others, and obviously Intel and NVIDIA are the huge chip manufacturers now as well, they found that, yeah, if people want to buy twice as many transistors every two years we’re happy to give them those.
So, the problem is Moore’s Law was great for CPU-style single threaded out of order execution, which essentially is benchmarks and nothing else today. My web browser doesn’t get out of bed without gigabytes of memory and a thousand threads or something. On one hand it’s not my favorite feature of most software today that it needs those kinds of resources. I’m sort of old school demo scene hacker-style programmer, and it seems like 64K should be more than enough for anything still, probably 4K if you’re really serious. But in practice it’s all about parallelism, and especially everything in our space.
So, all of the simulation, all of the graphics, all of the audio mixing, and not having anything block on resources, like talking to the network. Networking is no longer “I send the state of the game across my local network 60 times a second.” We’re talking to multiple content delivery networks, doing clever things as you go through cell towers and base stations, routing in complicated ways. In many cases we need these communications to be encrypted, either to prevent cheating or to protect the privacy of the players. So all of that is many, many threaded queues just for the networking, and the same for the graphics. We have a CPU and a GPU and we don’t want them to stall on each other, so everything is huge buffers and threads.
So, I think Moore’s Law in the classic sense of my CPU will get more transistors and higher clock speed and therefore I don’t have to change my algorithms. I think that’s been dead for probably 10 years completely. However, Moore’s Law in terms of the economic observation that roughly every two years, I think we’re actually doing better than this in the last couple years, the amount of computation I can get at sort of a fixed price is doubling. That seems really strong, and it’s largely because of parallelism, especially in GPUs, and now it’s kind of hit the limit, how many transistors you can pack into a single package. So, I think we’re starting to see multi-package devices, sort of bringing back multiprocessor. Multi-threading kind of killed the multiprocessor for a while, but now we’re seeing I want more cores than you can fit in a single chip. And so multiprocessor, I think, is back.
And in a data center that’s the entire world. It’s everything is an application that runs across thousands of processors and multiple nodes with very high speed network interconnects. So, I think absolutely we’re going to get the scaling. But the difference, and this is what changed 10 years ago, when Moore’s Law was scaling as the CPU gets faster on a single thread, I don’t change my algorithm. There was still great algorithm development going on obviously well into the 2000s and today, but the Moore’s Law speed up, I got that for free on top of my algorithm. And I was trying to get an extra speed up on top of that by changing the algorithm.
Today, if you don’t change your algorithm your program will not run any faster on newer hardware in many cases. And it may run slower because if the newer hardware has more cores that are clocked slower and you’re not taking advantage of those cores, that’s going to be a problem. And also it’s not about money today. The raw materials are free even though it costs a lot to develop them. So, you pay a lot for your processor of course. But power management, whether it’s mobile and it’s on battery, you’re trying to avoid thermal throttling or get longer life or it’s data center where you have to physically pump all that heat out of the data center and provide the energy or just the general concern of we want all of our computations to be as eco-friendly as possible and sustainable, so we want to be efficient. So, that’s the challenge.
And so the question I think becomes within a fixed or shrinking power envelope, how do I get twice the performance every two years or better? And that’s 100% you have to co-evolve software and hardware. So, you have to change your algorithms completely every couple years. And we’ve seen this with all the companies in the industry doing innovation on that space. For me the server side is the big one because that one is going to be the heart of the metaverse as all the data sitting on the server, and we have to get super efficient about that as a field.
Patrick Cozzi: So, Morgan, this is super interesting because I think twice now on the podcast we’ve had this topic of hybrid architectures come up and what do you run on the server, what do you run on the client, and how does that maybe change over time. So, I’d love to hear your perspective on that.
Morgan McGuire: I think it’s definitely changing a lot over time. So, the server side from a developer’s perspective is wonderful. You control the server. You can roll out homogeneous hardware everywhere. You know exactly when you’re going to upgrade it and you can tie everything to that. So, the server side, we’re very confident about at Roblox, and we’re doing a lot of interesting stuff going forward in that space. It’s sort of a centerpiece of our research agenda.
The client-side is tricky, and that’s been really interesting watching how the product side of the company deals with that. For Roblox it’s been really important to provide accessibility to the metaverse. And accessibility comes in many forms. It means the application’s in your language. It’s available in your region and your country. It means that the application can map to different devices. So, if you have either different physical abilities or you’re doing different tasks, VR … I could go out and get a Quest headset or whatever. I don’t really want to wear that on a city bus, that’s kind of antisocial and it’s probably not even safe to be wearing a VR headset as you’re hurtling around the city. So, accessibility in that context means showing up on your phone with the same … Not the same experience, but with an opportunity to participate in the same VR space with people who are on different devices. So, some are in VR, some are in console, some are on desktop, some are on phone.
And phones, especially the mid-range phones, are the most interesting end, because there’s such a diversity of devices, and they vary radically by country and by capability. The phone is ultimately limited not just by its processor but by its RAM and its network connectivity. So, spanning all of those client devices is really challenging, and it’s only going to get more challenging because the old ones don’t go away. In fact, many of our users at Roblox are people who are on a hand-me-down phone. They’re a teenager or a child, and so they’re using their parents’ previous-gen phone or a phone that was bought used. And we want to support those people and give them the best experience we can. So we can’t say, “You need this year’s iPhone in order to use our platform.” We have to make sure there’s some way to map the experience appropriately for them. And that’s really challenging, because sometimes there are bugs in the drivers and they’re never going to get fixed. They have really limited capabilities.
Something a friend of mine who’s now at Roblox as well, Angelo Pesce, said to me completely changed my mind about everything in the metaverse on the digital side. He said, “We’re not rendering the metaverse.” What he sees his team doing at Roblox is giving you a visualization of the metaverse. You see this in movies, right? In The Matrix, when they have the famous green text scroll, that’s one of many visualizations. And the ultimate visualization in that movie series is, of course, going in with the full-body immersive VR. But the notion that somebody can call in to the metaverse, that they’re not even in 3D, that they’re making a phone call, or that somebody is seeing kind of a data dump that they know how to interpret, and that’s the green scrolling text, that’s really important.
When you look at the problem that way, the problem is not that we’re trying to render the metaverse on a mid-range hand-me-down Android phone somewhere. The problem is: given the capabilities of that device, what’s the best window we can create, a visualization of that experience, the metaverse? That’s kind of tractable, because you’re not asking the phone to do anything it couldn’t have done 10 years ago. You’re just feeding it new data from the cloud.
I think it’s the case that, and this is sort of implied in your question, if you’re on a lower-end device, you might be getting a lot more server-side processing doing the heavy lifting. The extreme end that some companies have gone to, which I think is a really interesting space, is it might be 100%. Your phone might be playing back video that’s being dynamically rendered. And of course there are all kinds of latency and interactivity and bandwidth reasons why that might not be the best solution everywhere. But that’s a really interesting extreme.
And the other extreme is maybe you have this monster desktop: you have a Valve Index and big GPUs in it. In that case the server might be doing very little for you. You might be completely duplicating all of the 3D simulation, everything, to give you the absolute lowest latency, because that’s important for the experience and you have that hardware available to you.
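The spectrum Morgan describes, from fully cloud-rendered video on a thin client to fully duplicated local simulation on a high-end rig, can be imagined as a capability-driven tier selection when a device connects. This is a purely illustrative sketch; the tier names, thresholds, and function are invented here, not Roblox’s actual logic.

```python
# Illustrative client-tier selection for a metaverse platform: weak devices
# stream server-rendered video, high-end rigs duplicate simulation locally,
# and everything in between splits the work. All thresholds are made up.

def pick_client_tier(ram_gb, gpu_score, bandwidth_mbps, latency_ms):
    """Return (tier, rationale) for a connecting device."""
    if gpu_score < 10 or ram_gb < 2:
        # The device just decodes video; the server renders everything,
        # which only works if the network link is good enough.
        if bandwidth_mbps >= 5 and latency_ms <= 60:
            return "cloud-rendered", "weak device, good link"
        return "simplified-local", "weak device, weak link: render a reduced scene"
    if gpu_score >= 80 and ram_gb >= 16:
        # Duplicate the simulation locally for minimum latency (e.g. VR).
        return "full-local", "high-end device: lowest latency"
    return "hybrid", "mid-range: client renders, server simulates"

print(pick_client_tier(ram_gb=1.5, gpu_score=5, bandwidth_mbps=20, latency_ms=40))
```

The design point is that the tier decision is per-device and per-session, so a hand-me-down phone and a VR rig can share one server instance while doing very different amounts of local work.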
So, there’s a big balance. But the heterogeneity of clients, I think, is really interesting, and it’s a design challenge to figure out how you work with that kind of diversity.
Marc Petit: Yeah, and we know it’s about to get worse with wearables, right? Where we have a hierarchy of wearables, phones, and edge, and then the big-scale servers. So do you think our software architectures are ready to handle that three-, four-, five-tier compute environment?
Morgan McGuire: I don’t think anybody’s software architecture today is ready to handle that, especially AR through head-mounted displays, because that combines everything. It has even tighter latency constraints than VR.
Because in VR, you can’t see the real world. So there are all kinds of things where, if things are misregistered, they can affect your senses: your visual cortex, your sense of balance, your inner ear. But you don’t have a reference point in VR, beyond gravity.
And so you can get away with things sliding a little. In AR, the magic is gone if the virtual character slips inside of a wall, because I’m on a bicycle and they’re bouncing up and down. It has to be perfectly registered to the real world. And so you have to have super, super tight latency requirements on just the display aspect.
And then obviously the scalability becomes huge, because you need everything from the real world. The real world has to be, not necessarily replicated, but the virtual world has to be aware of it, to avoid things, to get occlusions correct, and to make the interactions meaningful.
So I don’t think anyone’s infrastructure today can handle that. That’s why we have a research department at Roblox, is to make sure that when those are widespread devices, and maybe that’ll be two years from now, which I would love, and maybe it’ll be 10 or 15 years, which I’m just happy that’s still hopefully in my lifetime. I think we’re all investing in figuring out how do we scale up to get to that?
We only just crossed the threshold where essentially infinite visual detail is okay. And we’re just starting to edge up on the notion of infinite scope of geometry, so getting planet-scale stuff. And no one in the field is near getting 10,000 or 50,000 players on a single server instance in real time. There are games like EVE Online that can do it, but then they run at one frame per second when that’s happening.
So we want to do that at… Life starts at 60 frames per second and goes up very quickly, especially in the wearables range. So doing it at 120 or 240 hertz is kind of where we need to be.
So I like those orders-of-magnitude challenges. But I think no one’s quite ready for that. I think we’re all doing a really good job; there’s a lot of good tech for desktop and phones today. And as we scale up the number of players, that’ll be stressed. And the new displays, I think, will give us a really exciting challenge that I’m looking forward to.
Marc Petit: So let’s come back to communication. What we like about the metaverse is that it’s social first; we do stuff with the people we like, hopefully. And mostly with the people we like. And so you’re starting to talk about natural language processing. We are all talking to our machines now. It took me some time; I’m old school, but I accept talking to the machines in my home now.
So where is this going as well? From your perspective, how do we make a much richer experience using natural language processing?
Morgan McGuire: It’s interesting that you mention this following the discussion on wearables, because I think that’s what’s really going to ultimately push it. So if you have a head-mounted display, and there are some really interesting ones. Like Bose, I think, had an audio-only one. So it was a wearable display, and it knew where it was in 3D, but it sidestepped some of the visual stuff.
And there’s been lots of amazing work. Every year at SIGGRAPH, you see all these cool emerging technology wearable displays. And there’s a couple that you can go out and buy them on the market today, for the see-through AR experience. There’s no mouse, there’s no keyboard, there’s no touch screen. The only interface that makes sense is voice and gesture at that point.
So definitely, that’s going to push, I think, really hard on that humans communicating with machines part. And at that point, it kind of has to work, because you don’t even want to have… When you talk to your phone, it pops up a confirmation dialogue. Or even when I’m using voice activation in my car, it wants a verbal affirmation that it got things right.
And I don’t think that makes sense to have every single… When that’s your only interaction, you’re not just saying, “Play music.” You’re checking your calendar, you’re scheduling things, maybe you’re programming by describing things with natural language. It doesn’t make sense at that point to have it ask for confirmation on everything. So it has to be all voice.
So that’ll be really interesting. I think that augmented reality will force the issue on human-to-machine natural language communication. For human-to-human communication, which is where our research lab at Roblox is working today on the natural language problems, I think in some sense it’s an easier problem, because you don’t have to solve the cognition.
Machines can translate and process audio that they don’t understand. No one would consider the natural language text translation services we use today to be intelligent in the sense of generalized intelligence and cognition. They can translate extremely well from English to French, for example, and they have no idea what they’re translating, of course. They just know how to do the process. And largely those are supervised learning systems, which, to give another shout-out to Marc and me in Canada: Canada had this great supervised learning training set, because all official documents are in both English and French.
And so as a result, it’s a perfect mapping from which you can train systems. So all the natural language translation was initially trained on that, and then it went out to supervised learning on other corpora from other languages. But that was kind of a neat thing. And I think it’s one of the reasons that English-to-French, and French-to-English, machine translation is particularly good.
It’s not intelligent though, it’s not understanding. And I think we would like to be able to do that on voice. I think we can do that. I think that’s a really important part for the metaverse, is to be able to connect people who are speaking different natural languages, but doing it by voice, not forcing them into a text interface.
So today on Roblox, we machine translate a lot of things. So if you put a sign in a world, so you’re creating an experience, you put up a little street sign or some instructions. We will automatically translate that in the product, into all of the languages that it’s deployed in. And so that’s kind of neat.
Developers don’t have to be particularly sophisticated about localization on their own, like you would be for a AAA game; the system can do the heavy lifting. And you can imagine text chat; you could pipe that through Google Translate or something. In text translation, you don’t really care about the latency, because the text takes a while to type in and a while to read, and it costs almost nothing to transfer.
And so as a result, it’s okay if you spend a second translating text. No one’s going to really mind that. When you get to voice, you lose the interactivity component, and it’s more expensive to compress and transmit. So you’ve already lost time off the top to the network, to the latency of transmitting it.
And so what we’re targeting currently, with our research projects, is trying to do natural language translation within 250 milliseconds. And this is not something that we’ll magically solve in the next year, or two, or three even.
I think it’ll be decades, probably, until we get real time natural language translation with voice fonts. So the translation could be in your voice, or it could be in some assumed voice of your avatar, that has moderation on top of that so that we can avoid certain kinds of abuse or toxic actions. That’s a very, very long term problem.
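Rough arithmetic shows why that 250-millisecond target is so hard for live voice. The stage estimates below are illustrative guesses, not measured Roblox numbers, but they make the point: summing typical pipeline stages naively blows the budget, so the stages have to overlap and stream.

```python
# Back-of-envelope latency budget for live speech-to-speech translation.
# Every number is an assumed, illustrative estimate.
budget_ms = 250

pipeline = {
    "audio capture buffer (two 20 ms frames)": 40,
    "network uplink to server": 40,
    "speech recognition (streaming)": 80,
    "text translation": 30,
    "speech synthesis (streaming)": 60,
    "network downlink + playback buffer": 50,
}

total = sum(pipeline.values())
print(f"total: {total} ms vs budget: {budget_ms} ms")
for stage, ms in pipeline.items():
    print(f"  {ms:3d} ms  {stage}")
print(f"over budget by {total - budget_ms} ms -> stages must overlap, not run in sequence")
```

Even with optimistic per-stage numbers, a strictly sequential pipeline lands around 300 ms, which is why real-time systems recognize, translate, and synthesize incrementally on partial utterances rather than waiting for complete sentences.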
But in our lab, we’re trying to do the first steps of that, because that notion of safety and translation internationalization, that’s been core to Roblox’s vision of the metaverse.
And so as we move into voice, that’s an area where the emerging tech is not as strong as it is for text. There are some cool results in research from universities, but they run for 10 seconds as an offline process. It’s not at all real time.
So that’s, for the natural language side, where we’re focusing. And we hope that within the next few years, we’ll be able to make some pretty big contributions to the state of the art for everyone in natural language voice translation.
Marc Petit: That’s fascinating. And it seems important. I guess I did not anticipate that this would be such a complex problem, so hopefully you figure something out there. And you mentioned the third aspect, and I do agree that the metaverse is all about moving from an era where we like and we watch, to an era where we partake and we create, and we have agency over content.
And empowering creators is, I think, a common theme that we see across many companies right now. So how do you envision this at Roblox?
Morgan McGuire: Yeah, thank you for asking about creativity. I think that also gets to the heart of openness, which is something the three of us, and obviously this podcast, really care about.
I was recently reflecting with a bunch of friends about watching, just within my lifetime, how technology and digital content creation became more accessible. And with an eye towards the building blocks of the metaverse space, I was thinking about an internship I had in high school. I was really fortunate; I grew up in the town where IBM Research had two of its labs.
And so I had a high school internship there, and I was one of the first people outside of classic academia who got to experience the pre-web internet, and then the web, the original Netscape Navigator, and all of that technology.
And that was huge for democratizing content creation: anybody could just make text, and images, and documents. And it wasn’t just the creation aspect; it was the whole authoring package, including distribution. Suddenly, I could make a document that anybody in the world could see, and it cost me almost nothing.
That was the first time people had that level of global scope as content creators, from their bedrooms. And I’m actually really sorry to see the way the web has evolved. The web is great today; I use it for everything.
But things like GeoCities and MySpace, that we kind of make fun of now as big communities that were a little hokey, with a design sense to match. That to me was so special, and I miss it: there were these large communities of people who found each other online because they were really into knitting, or whatever, and they could put that stuff out.
And that’s largely gone from the web; that kind of folk-art aspect of the web is gone. And then app stores came: Apple’s App Store, but now VR app stores and multiple video game app stores and desktop app stores, arguably too many app stores for most people. But from the perspective of small and medium developers, all the way down to indie, that was again a way you could create interactive content and sell it on a global scale. So the web let me put content out there, but there was no way to monetize it, no way to reward the content creator, and no quality control or security or gatekeeping, which for an application you really want. You want someone to have at least vetted that this thing isn’t going to destroy your phone when you download it. And so the app store is really for individual professionals.
So not for students, not for hobbyists, because the barrier to entry is a little too high for the app stores today. But it suddenly meant a lot more people could create content, share it, and be rewarded. And I’m also really into this niche scene that I don’t know if you guys are into: have you used fantasy consoles like PICO-8?
Marc Petit: Are you familiar with this?
Morgan McGuire: So fantasy consoles, this is the coolest thing, and I only found out about it a few years ago. You know how we have emulators for retro hardware, right? My original SNES gave up the ghost a long time ago, but you could get an emulator, and if you had a scan of the ROM from when you actually owned the device, you could play it on a modern PC. So it’s a PC emulating an older piece of hardware, which is easy because the PC is a lot more powerful. The fantasy consoles, I believe, originated with Joseph White, who goes by Lexaloffle, and he created this thing called PICO-8, which is an emulator for 8-bit retro hardware where there was no hardware.
He just made the emulator, and he made a whole IDE that ran on top of it. So you go into this 128-by-128 window and you have 16 colors, and you code and draw everything within this 128-by-128-pixel world. And you can make these games, and they’re programmed in Lua. So between having a scripting language, providing all the dev tools, and having a way to publish games from within it, it’s like a modern console app store, but anybody can participate. And it runs in the emulator, so it’s safe to download things because they’re in a sandbox.
That really democratized interactive content creation for 2D. I mean, you can hack 3D in the old-school ways people do, but it’s primarily a sprite-based 2D thing. And from that came a bunch of other fantasy consoles that aren’t 128 by 128, that are a little more modern. TIC-80 is one of them. There’s a project I run with a bunch of friends called quadplay, which is a more modern, runs-in-a-web-browser version of this idea. And those fantasy consoles solve the deployment problem; they really meant anybody could get in, make a video game, make a Space Invaders-inspired thing, and share it with the world. Most of them don’t have a good way to monetize, there’s a little bit through itch.io, so that was a step backwards from the commercial app stores.
And then metaverse version one, which is 2003 through now: there was Club Penguin, Roblox shipped around the same time, Second Life. All those things showed up in the early 2000s, and most of them haven’t scaled well. Roblox is the only one left standing, but it wasn’t the first mover. There were a lot of people in this space doing a lot of cool things, and I think we’ve all learned from that. And now there’s a second wave coming: Roblox is increasing its scale dramatically, but other companies are also entering the space, and we’ve all learned from each other.
And that to me is really democratizing 3D content creation. The notion that anybody can make an avatar. Or maybe you can’t do 3D modeling, but you can paint a texture, put it on a shirt, and sell that. That we can have some notion of protecting uniqueness, so you can do limited editions, so you can be guaranteed no one else is stealing the content you’ve produced, so you can do derivative works. It’s interesting: how do I get a royalty if you use my texture on this outfit you put out? I don’t want to block you, but I want my cut. So that’s all super interesting. And then there’s getting it all to interoperate with the whole tools ecosystem: maybe I want to be on one company’s asset store, but I don’t want to use that company’s tools, because I have other tools I like. So how do I create in one tool and then export?
This has been a classic problem in 3D, but until recently it was only a problem for professionals, because no one was creating 3D content except professionals. And so they could solve the interop problem by having engineers go off and spend a year writing an exporter, an importer, or adjusting the whole engine. Now that we’re starting to see everyone making content, whether it’s something as simple as an emote or a voice sample, or as complicated as an entire 3D interactive world, that interop problem is going to be an issue. I think there are a lot of interesting ways to start looking at it. It’s early days; there aren’t that many tools in use, there aren’t that many metaverse platforms, but just moving in that direction is super interesting.
And I think the first step is making it so you can connect any tool to any asset store. As we were talking about before we started the podcast recording, Roblox has since announced Open Cloud. What we did is take the APIs that Studio, our 3D IDE, uses, and release them all publicly. So essentially Studio becomes not the only way to create content for Roblox, which it traditionally had been, but merely a reference implementation of one way. You can make a plugin, or write a Python script, or whatever you want, to directly access the Roblox asset cloud and upload and download content. And that to me is a really exciting first step toward putting more and more tools together.
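As a rough illustration of what such a script might look like, here is a minimal Python sketch that builds an authenticated request against an Open Cloud-style HTTP API. The endpoint path, header name, and payload fields below are assumptions for illustration only; the official Open Cloud documentation defines the actual contract.

```python
# Sketch only: the shape of talking to an Open Cloud-style HTTP API from
# outside Studio. The URL path, header name, and payload fields below are
# illustrative assumptions, not the documented contract.
import json
import urllib.request

API_KEY = "YOUR-OPEN-CLOUD-API-KEY"  # keys are managed in the creator dashboard

def build_upload_request(payload: dict) -> urllib.request.Request:
    """Construct (but do not send) an authenticated asset-upload request."""
    url = "https://apis.roblox.com/assets/v1/assets"  # assumed endpoint
    return urllib.request.Request(
        url,
        data=json.dumps(payload).encode("utf-8"),
        method="POST",
        headers={
            "x-api-key": API_KEY,               # Open Cloud uses API-key auth
            "Content-Type": "application/json",
        },
    )

req = build_upload_request({"displayName": "MyMesh"})  # hypothetical payload
# Actually sending it would be: urllib.request.urlopen(req)
```

The point of the sketch is simply that once the APIs are public, any tool that can issue HTTP requests, a DCC plugin, a CI pipeline, a one-off script, can move assets in and out of the cloud without going through Studio.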
Marc Petit: Are you guys looking at supporting open formats, like glTF or USD or any of those?
Morgan McGuire: Yeah, that’s a great question. So currently we do a lot of our import in FBX, which predates USD. USD was used internally by Pixar for a while; it only became public in the last couple of years, and it only became really actively supported and embraced very recently. So Autodesk had a proprietary format called FBX.
Marc Petit: And which we know very well.
Morgan McGuire: Yeah, I’ve had a complicated relationship with FBX. It did certain things really well, and then there were other things where it either didn’t meet my needs or wasn’t open enough for me to extend it to meet my needs. And we’ve been using it because it interacts well with the existing tools from the professional ecosystem. But I’m personally very excited about the potential to look at USD as an alternative, or a replacement, or whatever.
Those aren’t my decisions to make; I’m just a researcher. But on the research side, I definitely use USD a lot, and it’s a great way to get interop with a huge number of tools, and increasingly a bunch of asset stores are starting to move in that direction. I think there are some limitations to USD, as there are with any format. And so those are exciting areas where I’d love to engage on fixing some of those limitations and see how we can help out, because it’s a very exciting format. I think there’s a lot to be done in that space.
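For context on why open formats lower the interop barrier: a glTF 2.0 document is, at its core, plain JSON (the spec strictly requires only the `asset.version` property), so even simple tooling can inspect or generate it with a standard JSON library. A minimal sketch, with a deliberately cheap validity check; real pipelines should use the official glTF validator:

```python
# Minimal sketch: a (geometry-free) glTF 2.0 document is just JSON.
import json

minimal_gltf = {
    "asset": {"version": "2.0"},  # the only property glTF 2.0 strictly requires
    "scenes": [{"nodes": [0]}],
    "nodes": [{"name": "root"}],
    "scene": 0,
}

def looks_like_gltf2(doc: dict) -> bool:
    """Cheap sanity check: glTF 2.0 requires asset.version to be "2.0"."""
    return doc.get("asset", {}).get("version") == "2.0"

# Round-trip through text, the way an exporter/importer pair would.
text = json.dumps(minimal_gltf)
print(looks_like_gltf2(json.loads(text)))  # True
```

This is exactly the property that makes "create in one tool, export to another" tractable for non-professionals: the container is human-readable and can be produced or consumed with nothing but a JSON library.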
Patrick Cozzi: So Morgan, it’s been super inspiring and fun to have you on the show today. And we’d love to wrap up by asking all of our guests, if there’s any person or organization they’d like to give a shout out to?
Morgan McGuire: Oh man, that’s like when you didn’t expect to win the Grammy, and then you’re on stage in front of a thousand people. So, I mean, obviously Roblox was built by, I think, 1,200 people right now, and there have been many others who’ve come and gone in the past. So for all the great stuff being done at Roblox today, I’m a voice for the work of many others. I want to make that clear: where that innovation has come from, and is going to continue to come from, is not me, but me helping others with those great ideas. But I just want to thank you, Patrick and Marc. And of course, we always have to give a shout-out to Eric Haines, who introduced us many, many years ago.
Marc Petit: Yes.
Morgan McGuire: And all the other folks in the field: the students, the people who are writing the textbooks, the people who are posting all that great content on Twitter. A few of us, the three of us included, are fortunate to be at the stage of our careers where we get to channel some of that innovation directly into products, and we’re all friends and get to go give these talks. But really, I feel like we are setting the stage for the people who are really going to make the metaverse, who are going to take it to the next level. And that’s the 12-year-old bedroom coder or 3D artist of today, or the undergrad in their first graphics class who’s asking us questions on Twitter. So I don’t really want to give a shout-out to all my so-called famous friends; I want to give a shout-out to all those people who engage with us publicly. That’s where I get new ideas, and that’s what gives me so much hope for the future of the metaverse. So thank you.
Marc Petit: Well, thank you, Morgan. Very nicely done. I think it’s great to see that shout-out to the new generation. They’re going to expect, they will want, the metaverse to work. I mean, they’re born with so many more expectations than we had. So, fantastic.
Well, I want to thank you so much for your time. You’ve been very generous, and it’s fantastic to have you with us. We intend this podcast to have the people who are actually building it talk about it to their fellow builders and pioneers, and this was fantastic. I also want to thank our audience; we hear more and more people are listening and enjoying the podcast, and I think this episode is going to be another high point for us. Thank you, Morgan. Thank you, everybody. Thank you, Patrick. And as always in these circumstances: if you like it, subscribe, rate, review, talk about it. Tell us what you think, tell us how we can do better. Thank you, everybody. Thank you, Morgan. Bye-bye.