Sep 30, 2021

Last week I wrote about the challenges of moving art between virtual worlds – especially the long-standing dream of moving avatars across wildly different worlds and experiences.

Something I didn’t touch on is whether this is a dream you actually want.

Chasing the wrong dreams

There are a lot of things people assume they want out of a metaverse which don’t really hold up under close scrutiny.

Do you really want to move your avatar between a fantasy world and a gritty noir world set in the Prohibition era? Even if it shatters all immersion when you head into a speakeasy and someone casts a fireball spell at you?

Do you really want to be in a ten-thousand-person battle with the latest weapons technology if it means getting headshot by a sniper a mile away whom you never got to see, dodge, or avoid in any way?

Do you really want to be able to just walk from one player-created world to another, if it means their distasteful content is clearly visible over your fence line?

What if all these features mean you can’t have a world where you’re actually a sailing ship instead of a person? Or controlling a swarm of beings? Or just playing a 2d puzzle game? Or…

You get the idea. It’s quite easy to arrive at poor architecture decisions because you’re chasing technical dreams that don’t hold up to what users actually want. Above all, we should never forget that the metaverse is for people, because all technology is for people.

Virtual worlds are places

An MMO, a multiverse, and the future dream of a metaverse all have one simple thing in common: they are places. Since their inception, online worlds have been about representing areas of virtualized space.

In the very earliest worlds, everything was just text. Players moved between individual rooms, each of which described a particular location. They were linked together in a web just like webpages are with hyperlinks.
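The room-and-link structure can be sketched in a few lines. This is a hypothetical illustration in TypeScript, not code from any actual MUD; all the names and the sample rooms are invented here.

```typescript
// A minimal sketch of the classic MUD "room" model: each room is a node
// with named exits pointing at other rooms, much like pages and hyperlinks.
interface Room {
  id: string;
  description: string;
  exits: Record<string, string>; // direction -> destination room id
}

const world: Record<string, Room> = {
  "trafalgar-square": {
    id: "trafalgar-square",
    description: "A bustling square. Nelson's Column looms overhead.",
    exits: { north: "charing-cross" },
  },
  "charing-cross": {
    id: "charing-cross",
    description: "A busy railway station.",
    exits: { south: "trafalgar-square" },
  },
};

// Moving is just following a link, exactly like clicking through pages.
function move(current: string, direction: string): string {
  const room = world[current];
  return room?.exits[direction] ?? current; // stay put if there's no exit
}
```

Notice that nothing here is spatial in a geometric sense: adjacency is purely a matter of which links exist, which is what makes the model so flexible.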

[Image: a MUD map of London in 1889, illustrating the room-based model]

Gradually, we got worlds with graphics. At first, the solution was to just display a picture in each room. But eventually, we were able to move around on a continuous map, first using 2d graphics and later using modern 3d rendering.

It didn’t take long for these graphical worlds to become so intensive and complex that it took a suite of computers to handle the continuous map for just one world – technology we invented to launch Ultima Online exactly 24 years ago this week, back at my first real game industry job.

By definition though, any multiverse (and remember, a metaverse is just a more advanced version of a multiverse) is going to involve many very different places. You don’t want those all to exist on one map. You’d end up with Fairyland butting up against World War II.

Aesthetics isn’t the main reason this is bad. The real issue is that players won’t be happy if they expected a nice peaceful tea party with talking flowers, took one step too far, and got run over by a Sherman tank.

Different places can offer different experiences, and offering different experiences is the point of having a multiverse at all. Otherwise, why not just have separate worlds?

Luckily, the older “room” model offers a solution, even though it might seem like a technologically boring one. It’s better to think of these different environments as alternate dimensions, rather than one “reality.” They can be linked via portals just like webpages are hyperlinked. This lets the worlds still fall nicely into the spatial metaphor we’re all comfortable navigating, because it’s the way the web already works.
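The portal idea is just the room-exit idea scaled up: a link between dimensions is an address, the way an anchor tag is a URL. Here is a hedged sketch of what such a record might look like; the types, field names, and sample worlds are all invented for illustration.

```typescript
// Hypothetical sketch: portals as hyperlinks between otherwise separate
// world "dimensions". A portal record is just an address pair.
interface Portal {
  fromWorld: string;
  at: { x: number; y: number };    // where the portal sits
  toWorld: string;
  spawn: { x: number; y: number }; // where you arrive
}

interface Presence {
  world: string;
  x: number;
  y: number;
}

// Stepping onto a portal follows the link; otherwise nothing happens.
function traverse(player: Presence, portals: Portal[]): Presence {
  const hit = portals.find(
    (p) => p.fromWorld === player.world && p.at.x === player.x && p.at.y === player.y
  );
  return hit ? { world: hit.toWorld, ...hit.spawn } : player;
}
```

Because the two worlds never share a coordinate space, Fairyland and World War II can each keep their own rules, physics, and aesthetics, and only touch at the portal itself.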

Maps are maps are maps

This becomes relevant to metaverse aspirations when you realize there isn’t much difference between the maps app you use for navigating real world highways, and the minimap you use to explore a fantasy world.

In both cases, there is a server somewhere holding map data. The map data might be carefully crafted by a game worldbuilder… or it might come from GIS digital elevation map data. It might have landmarks on it like the Dread Wizard’s Tower… or it might have locations marked with Yelp reviews.

This is where applications that seem hugely different today will converge. When you explore Azeroth in World of Warcraft, your client is updating your location on an invented fantasy map. When you scroll around Zillow in your living room, your client is moving your location on a digital real world map. When you use Waze for up to date traffic info, you are physically moving in the real world, and your movement is controlling a virtual “avatar” on the digital map. And when you hunt Pikachu in PokemonGO, your movement is doing the exact same thing, but the map is a clever mix of reality and fantasy.

You could think of it as a little table like this:

|                               | Real map                                                                   | Invented map                 |
|-------------------------------|----------------------------------------------------------------------------|------------------------------|
| Move only in virtual space    | Yelp, Zillow, and pretty much all other apps that annotate real world places | All MMOs                     |
| Move in virtual and real space | Pretty much all GPS-enabled apps today                                     | All AR games like PokemonGO  |
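The convergence is easier to see in code: whether the position update comes from a keypress in Azeroth or a GPS fix on the highway, the client-side operation is the same. This is a toy sketch; the `MapSource` abstraction and every name in it are hypothetical.

```typescript
// Whether the map is real or invented, the client does the same thing:
// update a position against map data served from somewhere.
interface MapSource {
  kind: "real" | "invented";
  landmarkAt(x: number, y: number): string | undefined;
}

interface Avatar {
  x: number;
  y: number;
}

// A GPS fix and a WASD keypress reduce to the same call.
function updatePosition(a: Avatar, dx: number, dy: number, map: MapSource): string {
  a.x += dx;
  a.y += dy;
  return map.landmarkAt(a.x, a.y) ?? "open terrain";
}
```

Swap in GIS elevation data and Yelp pins for hand-crafted terrain and the Dread Wizard’s Tower, and the rest of the stack doesn’t care.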

The bottom right box also includes much of what is dreamed about in “AR Cloud,” including videos like this dystopian nightmare.

Today, apps like Yelp or Waze use the typical web stack to deliver their content. But as the AR Cloud evolves, it is going to quickly become apparent how much real-time data and interaction matter. The right solution for moving digital avatars that interact on maps in real time is, and has always been, a virtual world server.
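What makes a virtual world server different from a web server is, at heart, a tick loop over authoritative state. The sketch below is a deliberately toy version under my own assumptions (no networking, no interest management); the class and method names are invented for illustration.

```typescript
// Toy sketch of the virtual world server pattern: the server holds
// authoritative positions and periodically broadcasts deltas to clients.
type EntityId = string;

interface Position {
  x: number;
  y: number;
}

class WorldServer {
  private positions = new Map<EntityId, Position>();
  private dirty = new Set<EntityId>();

  // A move request -- from a game client's keypress or a phone's GPS fix.
  handleMove(id: EntityId, pos: Position): void {
    this.positions.set(id, pos);
    this.dirty.add(id);
  }

  // Each tick, emit only what changed since the last broadcast.
  tick(): Array<{ id: EntityId; pos: Position }> {
    const updates = [...this.dirty].map((id) => ({
      id,
      pos: this.positions.get(id)!,
    }));
    this.dirty.clear();
    return updates;
  }
}
```

A request/response web stack recomputes the world on demand; this model instead keeps the world alive between requests, which is exactly what real-time avatars need.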

(This, by the way, also means that everything we’ve seen happen in video games – whether it be the corrupted blood plague in World of Warcraft or dupe bugs or flying genitalia – is something we can look forward to happening in our real life neighborhoods someday. If you are interested in a sobering look at those consequences, you might enjoy this lengthy talk I gave a few years ago).

How it looks versus what it is

You may recall that in my last article I pointed out that the lack of standards for art is a big barrier to a fully interoperable, standards-driven metaverse.

We have even fewer standards for maps.

Worse… maps change. A lot. People want to change them. Restaurants close. Buildings are demolished. One of the biggest complaints players have about their virtual worlds today is they are basically cardboard stage sets that cannot evolve.

Virtual world maps today mostly work like the art I talked about last time: they are baked into a static form, and then pre-installed on your hard drive with the client. This makes changing them hard, which is exactly backwards for the single commonest use humanity has always had for any place: to build on it.

It’s actually worse than you think: common practice is to ship the map as art. It is literally a sculpture carefully made by an artist. It isn’t even “map data.” It’s a picture of map data, a sculpture of map data.

Many developers fall prey to a fallacious assumption about virtual worlds: they seem to think the stuff on the client matters. But it doesn’t. I call it “the goggles fallacy.” We get really caught up in the rendering of it all, the way it looks. This is why maps still get shipped as art even though we have the cloud.

Look: My car is old enough that it has a built-in navigation system with a DVD of maps. It’s hilariously obsolete! Why do we think it’s OK for our metaverse future to work this way?

A client can display a map – and therefore a virtual world – any way it wants. In fact, it should! It should display it in the most useful way for the user. That means we need to get away from the idea of maps being art. They aren’t. They’re data. They need to come down on the fly. They need to be able to change. To evolve. They need to be maps of alternate dimensions and far-flung realities and of downtown Cleveland.

A funny little secret: virtual world operators have always had clients that offered specialized views of what was going on in the virtual world. Density maps, load maps, debugging views, all sorts of things. That’s the right way to think of it all: the player view is just one of many.
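The multiple-views idea falls out naturally once the map is data. Here is a small hypothetical sketch: the same grid of cells projected two different ways, one for players and one for operators. The cell shape and view functions are assumptions of mine, not any particular engine’s API.

```typescript
// "The player view is just one of many": the same server-side map data
// can be projected into whatever view a given client needs.
interface Cell {
  terrain: string;   // e.g. "grass", "water"
  occupants: number; // how many entities currently stand here
}
type MapData = Cell[][];

// The player client renders terrain...
function playerView(map: MapData): string[] {
  return map.map((row) => row.map((c) => c.terrain[0]).join(""));
}

// ...while an operator client renders the very same data as a density map.
function densityView(map: MapData): string[] {
  return map.map((row) => row.map((c) => String(Math.min(c.occupants, 9))).join(""));
}
```

Neither view is privileged; both are projections of one authoritative dataset, which is the whole argument against shipping the map as a sculpture.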

Which brings me to where the rubber really meets the virtual road: If everything is data, then we really need to dig into how that data works, how it is structured, how we inform the client of it, and most crucially how the data can mean things.

But that sounds like a big topic, so I’ll save it for another day.

crossposted from the Playable Worlds site

  One Response to “How Virtual Worlds Work, part two”

  1. Hey there Raph, Decentraland developer here. I love your work (particularly UO and ATOFFGD), and I’ve been lucky enough to attend some of your talks in Argentina and at GDC.

    I find it very exciting that you are building a metaverse platform, as we are at Decentraland (a virtual world loading seamless land-parcel scenes around the user instead of isolated ones). Funny enough, we are also using TypeScript for the SDK/framework in which land owners code their scenes. In our case we are web-first at the moment, and our world explorer is a web page with a “Kernel” JS component that streams the land/scenes within the loading radius of the player and sets up web workers (threads) that run the sandboxed TypeScript code of every scene. The other big component is an embedded WebGL Unity build that works as a kind of renderer, placing things based on its communication with the Kernel (we also take advantage of some engine features, so it’s more than just a renderer, actually).

    Regarding streaming art for the world, you are right about it being a pain in the ass. We actually parse GLTF/GLB files at runtime for the 3D models (at very expensive CPU cost, as we don’t have multi-threading support yet in our WebGL build); those files are also stored/discarded in the browser’s cache. To avoid such high CPU costs we try to have an alternate version of deployed 3D assets and textures in a more performant, Unity-friendly form (asset bundles).

    We suffer some of the issues you mentioned regarding decentralization (all the world content is on our distributed content network, and the end-user experience may be better or worse depending on the quality of the server reached) and also regarding “having neighbors,” haha. Although I find that loading a seamless world with contiguous user-generated scenes on the land parcels is a kind of “realistic” discoverability mechanic, like seeing something interesting far away IRL and getting closer to check it out. I believe the best option, platform-design-wise, for creator versatility would be letting creators define whether a scene is an isolated world or a seamlessly loaded parcel, and allowing both types of scenes – like entering a building with no windows, or hanging out at a park and seeing the city around you.

    Many of the problems with decentralization are being tackled with DAO democratization, although I’m not so sure it’s the final solution or a silver bullet.

    I too believe that these virtual world platforms we are building are for the people of the future. Let’s keep rocking it, and best of luck!
