Implementing a local multiplayer game is mostly trivial and usually much easier than implementing computer-controlled entities. That is why many of the earliest video games were multiplayer games. But screen space as well as the number of usable controllers is very limited for local multiplayer games – and those are of course just some of the problems. Consequently multiplayer games started using computer networks, which sadly poses lots and lots of problems.
Early networked games used peer-to-peer algorithms in which every client is treated equally and no explicit server exists. The game loop is modified so that each client sends its input data (usually raw controller commands for action games or higher-level commands for strategy games) to all other clients; once a client has received the input data from all other clients, it finishes calculating the current step of the game loop as usual. Peer-to-peer lockstep networking is easy to implement and needs little network bandwidth, but it is a very fragile algorithm.
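The lockstep loop described above can be sketched as follows. This is a minimal illustration, not code from any real engine: `LockstepPeer`, the integer "state", and the way inputs are applied are all stand-ins for a real deterministic simulation.

```python
# Minimal lockstep sketch: a peer advances the simulation only once it
# holds the inputs of *all* peers for the current tick. Everything here
# (class name, integer state) is illustrative, not from a real engine.

class LockstepPeer:
    def __init__(self, peer_ids):
        self.peer_ids = set(peer_ids)
        self.tick = 0
        self.pending = {}   # tick -> {peer_id: input command}
        self.state = 0      # stand-in for the real game state

    def receive_input(self, tick, peer_id, command):
        self.pending.setdefault(tick, {})[peer_id] = command

    def try_step(self):
        """Advance one tick only if inputs from every peer have arrived."""
        inputs = self.pending.get(self.tick, {})
        if set(inputs) != self.peer_ids:
            return False                    # still waiting: the game stalls
        for peer_id in sorted(inputs):      # deterministic order matters!
            self.state += inputs[peer_id]   # stand-in for applying an input
        del self.pending[self.tick]
        self.tick += 1
        return True
```

Note how `try_step` returning `False` is exactly the fragility discussed below: one slow or hung peer stalls every other peer.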
A game has to be completely deterministic when it is networked this way; otherwise different clients would calculate slightly different values and the game would be completely out of sync within seconds. Determinism is achievable in a computer simulation, though. Randomness is rarely truly random in a computer program: as long as every client uses the same randomization algorithm, it is sufficient to share the random seed that initializes it. Floating-point calculations, however, have to be used carefully. Different compilers apply different optimizations (which can usually be disabled) that can alter the results slightly, and different CPUs might also calculate floating-point values slightly differently. Also, one and the same CPU can have different modes for floating-point accuracy, which might be changed by arbitrary applications or libraries running in the background.
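The shared-seed idea is easy to demonstrate. In this sketch, `simulate` stands in for a whole game simulation; two clients that start from the same seed produce bit-identical results:

```python
import random

def simulate(seed, steps):
    """Stand-in for a game simulation driven by pseudo-randomness."""
    rng = random.Random(seed)   # a private generator, not the global one
    state = 0
    for _ in range(steps):
        state += rng.randint(0, 9)
    return state

# Two "clients" initialized with the same seed stay perfectly in sync:
client_a = simulate(seed=42, steps=1000)
client_b = simulate(seed=42, steps=1000)
assert client_a == client_b
```

Using a private `random.Random` instance instead of the module-level functions matters: background code pulling numbers from a shared global generator would desynchronize the clients.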
Taking care of determinism sadly does not solve all reliability problems of peer-to-peer game networking. A peer-to-peer game depends on every client, and if one client hangs, the complete game hangs for every client. This also maximizes latency, because the overall latency always equals the latency of the slowest connected client. It is also difficult to support players joining an already running game, which would require some kind of additional syncing implementation or a complete replay of all previously issued game commands.
Today peer-to-peer is rarely used for action games because the latency-maximizing nature of the algorithm makes it hardly usable on very large networks (like the internet). But it is still used for strategy games, which do not depend as much on low latencies and can even hide latency with simple tricks like immediately playing animations and sound effects that do not influence the game logic.
A client/server model is a much more robust alternative. In this model game commands are sent to a single server that is in control of the game. The server does not distribute those commands but instead runs the whole game simulation itself and then sends the resulting game state to all clients. As the game is only simulated on a single machine, it is mostly as robust as a single-player game. Clients cannot negatively affect other clients and can also drop in and out at any time easily, because the complete game state is transferred for every frame. Consequently, the size of the game state is of primary concern. It can be minimized by calculating all purely cosmetic effects on the clients (like typical particle effects) and by only transferring the game state that is currently visible to a client. More complex physics, however, can easily become a big problem when it is relevant for the actual gameplay.
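The authoritative server tick can be sketched like this. `Server`, the command format, and the one-dimensional player positions are hypothetical simplifications; the point is the structure: apply all queued commands, then broadcast the resulting state rather than the commands.

```python
# Sketch of an authoritative server tick. The server is the only
# machine that simulates the game; clients only receive snapshots.

class Server:
    def __init__(self):
        self.state = {"players": {}}   # the single authoritative state
        self.inbox = []                # (client_id, command) received this tick

    def submit(self, client_id, command):
        self.inbox.append((client_id, command))

    def tick(self):
        # 1. Apply every queued command to the authoritative state.
        #    Here a "command" is just a 1-D movement delta.
        for client_id, command in self.inbox:
            pos = self.state["players"].setdefault(client_id, 0)
            self.state["players"][client_id] = pos + command
        self.inbox.clear()
        # 2. Broadcast the *resulting* state, not the commands.
        return dict(self.state)   # snapshot sent to every client
```

Because only the snapshot leaves the server, a client that joins mid-game simply starts receiving snapshots; no command history has to be replayed.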
The clients in the client/server model are very dumb and do not really run a game – they just interpolate the game state they receive from the server. But exactly this is what makes the model robust. Even cheating is very hard with the client/server model.
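What "interpolating the game state" means in practice: snapshots arrive at a lower rate than the client renders, so the client blends between the last two snapshots it has. A minimal linear version for a single coordinate:

```python
def lerp(a, b, t):
    """Linear interpolation between two snapshot values, t in [0, 1]."""
    return a + (b - a) * t

# A client rendering several frames between two server snapshots of a
# position (the snapshot rate and values are made up for illustration):
prev_x, next_x = 100.0, 110.0
frames = [lerp(prev_x, next_x, t / 4) for t in range(5)]
```

Real games interpolate full entity transforms and usually render slightly in the past, so that a new snapshot is always available to interpolate towards.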
But lag is still a problem. The latency for each client equals the latency of its connection to the server, and waiting for server messages even to apply basic movements of the client's own character makes action games hardly playable. For that reason the pure client/server model is basically outdated.
Client-side prediction
Today’s games use a modified version of the client/server model in which each client simulates the game, too. The server is still the boss of the game, however; the local simulations are just predictions of what the server is likely to calculate. When the exact same game runs deterministically on the client, the predictions will be exactly correct for all situations that are not influenced by other players – consequently the movement of the player’s own character is mostly predicted correctly, and input response can feel as fast as in a single-player game.
But of course predictions will fail regularly for the behavior of other players. In these situations the game state sent by the server has to override any local predictions. Mispredictions can be concealed by blending from the predicted state to the corrected state for objects that are in view. To correctly apply corrections, the game has to account for the time it takes for data to arrive from the server. Server data is always outdated when it arrives at the client and has to be compared to the local prediction from the same point in time. Based on the correction, new data for the current time can be predicted, which is then used to interpolate to a corrected current game state.
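One common way to implement this correction step is to keep a history of unacknowledged inputs: when an (outdated) server state arrives, the client rewinds to it and replays the inputs the server has not seen yet. The sketch below does this for a one-dimensional position; the class and field names are made up for illustration.

```python
# Hedged sketch of client-side prediction with reconciliation,
# for a 1-D position and additive movement inputs.

class PredictingClient:
    def __init__(self):
        self.position = 0
        self.history = []   # (sequence, input) not yet confirmed by server
        self.seq = 0

    def apply_local_input(self, move):
        """Predict immediately instead of waiting for the server."""
        self.position += move
        self.history.append((self.seq, move))
        self.seq += 1

    def on_server_state(self, acked_seq, server_position):
        """Reconcile an outdated authoritative state with local predictions."""
        # Drop inputs the server has already processed...
        self.history = [h for h in self.history if h[0] > acked_seq]
        # ...rewind to the authoritative state, then replay the rest.
        self.position = server_position
        for _, move in self.history:
            self.position += move
```

If the server agrees with the old prediction, the replay reproduces the current position and nothing visibly changes; only on a genuine misprediction does the position snap (or, in a real game, blend) to the corrected value.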
All those predictions and interpolations can cause weird situations where, for example, an enemy is hit visually but is not hit according to the server. There is no single right solution to this problem, and consequently it will always be important to minimize the time it takes to transfer data between client and server.
All of today’s commonly used networks are based on the Internet Protocol (IP). IP is a packet-based network protocol that works somewhat like a postal service. It is not a reliable protocol: packets are not guaranteed to arrive at their destination, or to arrive in a specific order. TCP is a protocol that is commonly used on top of IP to provide reliable, direct connections between computers. TCP makes it possible to exchange data streams directly and easily – a fundamental feature for most networked applications. To provide a reliable connection, TCP makes sure every single packet arrives at its destination in the exact order it was sent. To achieve this, TCP has to retransmit lost packets and can only pass on the latest data when all previous packets have been received. This can create long delays, which should be avoided in multiplayer games. Also, games do not depend on receiving every packet of game state – on the contrary, by the time a packet is retransmitted the game has usually already received a newer packet which supersedes the lost one. Therefore games tend to use the UDP protocol, which is basically IP itself, just complemented with port numbers. On top of UDP, games can implement TCP-like functionality for features that require reliable data exchange (for example highscore lists).
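The "newer packet supersedes the lost one" logic is usually implemented with per-packet sequence numbers. A minimal receiver-side sketch (the class is hypothetical; real protocols also handle sequence-number wraparound, which is omitted here):

```python
# Sketch: superseding snapshots by sequence number, as a UDP-based game
# might do instead of relying on TCP's strict in-order delivery.

class SnapshotReceiver:
    def __init__(self):
        self.latest_seq = -1
        self.latest_snapshot = None

    def on_packet(self, seq, snapshot):
        """Keep only the newest snapshot; stale packets are dropped."""
        if seq <= self.latest_seq:
            return False          # late or duplicate packet: already superseded
        self.latest_seq = seq
        self.latest_snapshot = snapshot
        return True
```

A lost or reordered packet costs nothing here: the game simply keeps interpolating towards the newest snapshot it has, with no retransmission delay.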
Because UDP is a very low-level protocol, it poses additional challenges beyond reliably getting data from one computer to another. UDP data transfer times should be measured to avoid sending more data than a connection can handle. Luckily, though, the client/server model makes it easy to work with varying data rates – a higher data rate reduces prediction errors on the client, but it is not necessary to make the game work.
A possible future direction for multiplayer games are game streaming services, which run the complete game including rendering on the server and directly return a video stream based on the input data sent over the network. Apart from the data exchange, which can easily be implemented independently of the games themselves, games can be implemented like local multiplayer games, and everything is always perfectly in sync. With ever-improving video compression and network bandwidth, streamed games can also look good. But latency is as bad as in the basic client/server model, or worse. As the reason for latency is mostly the distance between the connected computers, game streaming services try to place a lot of servers around the world to minimize it. This strategy typically results in latencies that some people consider good and some people consider unacceptable. In any case it is not an option for Virtual Reality games, where super low latency is critical for a pleasant experience. Research projects like Square-Enix’s Shinra try to explore game streaming as an option to create previously impossible multiplayer games, which could for example calculate big physics systems that normally couldn’t be synced to clients efficiently. It remains to be seen whether game streaming services will catch on or will soon be forgotten the same way as most previous efforts to replace local hardware power with server farms.