Remote visualization refers to any visualization in which some or all of the data resides on a different machine from the one used to view the images.

The main reason to do remote visualization is that it can take too long to transfer data from the remote site. The adage "bring computing to the data" holds for visualization as much as for raw computation. There are other reasons, too, including security restrictions on data transfer. It may be permissible to visualize part of a dataset or some view of it when it would not be acceptable to download the entire dataset.

Another reason to visualize on a remote resource is that the data may be too large for the capabilities of a single workstation. It may be too large to fit on disk, or so complex that only an aggregation of resources, such as multiple GPUs, makes it practical to understand. This kind of work requires specialized hardware and functionality that a remote visualization facility might provide.

Responsiveness of the network between you and the remote facility is crucial to remote visualization. The standard way to visualize remotely is to use VNC (Virtual Network Computing) or another remote desktop program to transfer images of a whole desktop from the remote machine to the local one. For a 1280x1024 display at 3 bytes per pixel (24-bit color), sending 24 frames per second requires about 94 megabytes per second uncompressed. The good news is that, with compression, many remote connections can handle this.
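The bandwidth figure above is a simple product of resolution, color depth, and frame rate. A minimal sketch of the arithmetic, assuming 24-bit RGB color (3 bytes per pixel):

```python
# Back-of-the-envelope estimate of the uncompressed bandwidth needed to
# stream a remote desktop. Resolution and frame rate match the figures in
# the text; bytes_per_pixel assumes 24-bit RGB color.

def uncompressed_bandwidth(width, height, bytes_per_pixel, fps):
    """Return required throughput in bytes per second."""
    return width * height * bytes_per_pixel * fps

bps = uncompressed_bandwidth(1280, 1024, 3, 24)
print(f"{bps / 1e6:.0f} MB/s uncompressed")  # roughly 94 MB/s
```

Compression typically reduces this by an order of magnitude or more for desktop content, which is why the stream is feasible over ordinary connections.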

There are more subtle network challenges, too. Even if a network has the bandwidth to send frames of video, its latency determines how long it takes for mouse clicks to travel from the user's terminal to the remote site and for the changed display to travel back. This latency is noticeable when using VNC. Another subtlety is that, even when average bandwidth and latency are acceptable, both can degrade at times because of other network traffic. High quality of service (QoS) goes a long way toward making remote rendering feel responsive.

Remote rendering models are characterized by the point in the visualization pipeline at which the data is transferred from the remote site to the local site. For instance, the remote site might read a finite element mesh, find the surface of that mesh, then send the surface data, as points in space and connections among them, to the local machine. The local machine would decide how to color and transform those points into polygons for display.
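The mesh-surface example above can be sketched as a pipeline with a movable split point. The stage names and data structures here are simplified placeholders, not a real visualization library's API:

```python
# Sketch of a visualization pipeline split between remote and local stages.
# Stage names are simplified placeholders; the split point shown mirrors
# the finite element mesh example in the text.

REMOTE_CAPABLE = ["read_mesh", "extract_surface"]  # run where the data lives
LOCAL_CAPABLE = ["color", "transform", "render"]   # run on the workstation

def split_pipeline(transfer_after):
    """Return which stages run remotely and locally when the data
    transfer happens immediately after the named stage."""
    stages = REMOTE_CAPABLE + LOCAL_CAPABLE
    cut = stages.index(transfer_after) + 1
    return {"remote": stages[:cut], "local": stages[cut:]}

plan = split_pipeline("extract_surface")
print(plan["remote"])  # ['read_mesh', 'extract_surface']
print(plan["local"])   # ['color', 'transform', 'render']
```

Moving the split later in the pipeline (after rendering) sends only images, as in the VNC model; moving it earlier sends raw or derived geometry and shifts more work to the local machine.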

The increase in connection speeds has made it more practical to execute the whole visualization pipeline on the remote machine and view the result with a remote desktop application. This is a direct and effective solution because it requires no change to visualization applications. It also lets users take advantage of specialized hardware at the remote site to construct the images. Centralized construction of the final image can also make collaboration easier: multiple users at different sites can connect to the same remote visualization.

 
© Cornell University | Center for Advanced Computing