
Slice 101: How To Encode a Slice



Right now, we’re looking at a new way to stream game content that offers something more than video. Rather than streaming images, GPEG prefetches the game assets needed to play just as you need them. Meshes, textures, animations, particle effects: all the elements needed to experience a game are streamed directly to the game engine running on local hardware. This means the game is playing locally, not in the cloud. GPEG gives us the best of both worlds: the ability to get into a game quickly, without the streaming latency, video compression artifacts, or access limitations of video-based cloud gaming.


GPEG streams content as sub-asset packets that encode the portions of the game assets that become visible as the player moves through the game environment. We define each bounded area that shares a similar visibility horizon as a viewcell, as shown here. Both the viewcells and the sub-asset packets are automatically generated once for a given game environment, from within the game editor. Whenever the GPEG server software predicts that an avatar will pass through a viewcell, that cell's sub-asset packets are streamed from the server to the game.
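To make the idea concrete, here is a minimal sketch of the server-side decision described above. GPEG's actual implementation is not public, so every name here (`Viewcell`, `packets_to_stream`, the packet IDs) is hypothetical; the point is only the logic of mapping predicted viewcells to the sub-asset packets that still need to be sent.

```python
from dataclasses import dataclass

# Hypothetical sketch: names and structures are illustrative, not GPEG's API.

@dataclass(frozen=True)
class Viewcell:
    """A bounded region of the level with a shared visibility horizon."""
    cell_id: int
    packet_ids: tuple  # sub-asset packets visible from inside this cell

def packets_to_stream(predicted_cells, already_sent):
    """Return sub-asset packet IDs needed along the predicted path,
    skipping anything the client has already received."""
    needed = []
    for cell in predicted_cells:
        for pid in cell.packet_ids:
            if pid not in already_sent and pid not in needed:
                needed.append(pid)
    return needed

# Example: the avatar is predicted to cross cells 7 and then 8,
# and packet 101 has already been streamed.
cell7 = Viewcell(7, (101, 102))
cell8 = Viewcell(8, (102, 103))
print(packets_to_stream([cell7, cell8], {101}))  # [102, 103]
```

Because packets are deduplicated against what the client already holds, a player revisiting a viewcell costs nothing extra on the wire.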


We’ll use the rest of this video to show you how easy it is for a level designer to set up a game for GPEG streaming using an example from the Unreal Engine.


The first step is to define the absolute bounds of the encoded level using the GPEG sky sphere. To do this, we drag the "GPEG Sky Sphere" into the level from the Place Actors panel after deleting the original sky sphere. We next define where the game camera can move in the space. From the Place Actors panel, we drag a "GPEG Camera Navigation Volume" into the level and resize it to fit the navigable area we want to use.


We also need to define where we want the player's avatar to be able to travel in the level. From the Place Actors panel, we drag a "GPEG Reachable Actor" into the level to define what parts of the level are accessible to players. Next, we set the starting point for the game experience. From the Place Actors panel, we drag a "GPEG Camera Markup" actor to the level and drop it into the initial viewcell in the game that we want to use.


The next step is to add four GPEG actors to the level to track avatar movement in the game.


From the GPEG drop-down menu, we select "Add Actor Data" to put those actors in place.


Finally, we use the World Outliner to select all of the stationary actors. The GPEG plugin shows these actors in the Details panel, where we check the occlusion box so their locations can be used in the visibility calculations.


With setup complete, we can now export the current level and get ready for GPEG encoding.


Based on the configuration we’ve provided, the GPEG plugin generates the viewcells that fill the navigable space within the Camera Navigation Volume we defined. The generated viewcells automatically incorporate all of the collision objects in the level and conveniently define a 3D navigation grid. With this step done, the game is GPEG-encoded and ready to go. The only remaining step is to move the encoded content to a server players can connect to.
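The grid-generation step above can be sketched as follows. This is only an illustrative guess at the approach, assuming the plugin subdivides the navigation volume into cubic cells and accounts for collision boxes; the actual algorithm is not public, and `generate_viewcells` and its parameters are invented for this example.

```python
import itertools

# Hypothetical sketch of filling a navigation volume with grid viewcells.
# The real GPEG plugin's algorithm is not public.

def generate_viewcells(volume_min, volume_max, cell_size, blockers):
    """Subdivide an axis-aligned navigation volume into cubic viewcells,
    skipping cells whose centers fall inside a blocking collision box.

    volume_min / volume_max: (x, y, z) corners of the navigation volume.
    blockers: list of (min_corner, max_corner) collision boxes.
    """
    def inside(point, lo, hi):
        return all(lo[i] <= point[i] <= hi[i] for i in range(3))

    counts = [int((volume_max[i] - volume_min[i]) // cell_size)
              for i in range(3)]
    cells = []
    for ix, iy, iz in itertools.product(*(range(n) for n in counts)):
        center = tuple(volume_min[i] + (idx + 0.5) * cell_size
                       for i, idx in enumerate((ix, iy, iz)))
        if not any(inside(center, lo, hi) for lo, hi in blockers):
            cells.append((ix, iy, iz))
    return cells

# A 2x2x1 volume with one corner cell blocked by a collision box.
cells = generate_viewcells((0, 0, 0), (2, 2, 1), 1.0,
                           blockers=[((0, 0, 0), (1, 1, 1))])
print(len(cells))  # 3
```

The surviving cells form the 3D navigation grid the text describes: every open cell becomes a viewcell with its own precomputed visibility horizon.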


After setup is complete, the GPEG game client uses a lightweight API to receive the streamed content we just encoded from the GPEG server. The GPEG game client can be distributed to players through normal distribution channels.
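A client loop in that spirit might look like the sketch below. The real GPEG client API is not public, so `GpegClient`, `tick`, and the message shapes are all assumptions made for illustration; the sketch only shows the shape of the exchange: report the avatar's position, ingest whatever sub-asset packets the server pushes.

```python
# Hypothetical sketch of a GPEG-style client loop; every name is illustrative.

class GpegClient:
    """Minimal client: reports avatar position, ingests streamed packets."""

    def __init__(self, transport):
        self.transport = transport  # any object with send() / receive()
        self.loaded = set()         # packet IDs already in engine memory

    def tick(self, avatar_position):
        # Tell the server where the avatar is so it can predict viewcells.
        self.transport.send({"pos": avatar_position})
        # Ingest whatever sub-asset packets arrived this frame.
        for packet in self.transport.receive():
            self.loaded.add(packet["id"])
        return self.loaded

class FakeTransport:
    """Stand-in for the network layer, for local testing."""
    def __init__(self, incoming):
        self.incoming = incoming
    def send(self, msg):
        pass
    def receive(self):
        batch, self.incoming = self.incoming, []
        return batch

client = GpegClient(FakeTransport([{"id": 101}, {"id": 102}]))
print(sorted(client.tick((0.0, 0.0, 0.0))))  # [101, 102]
```

Keeping the client this thin is what lets it ship through normal distribution channels: all level-specific knowledge lives in the encoded content on the server.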


Want to learn more about Slice? https://www.concurrents.com/


