The Space Station Training Facility (SSTF) is an environment for training astronauts in Space Station Freedom (SSF) operations. Like most simulators, the SSTF includes an image generator (IG) that visually simulates the out-the-window scene and the closed-circuit TV views of SSF. Because the displayed objects are graphic images, they can merge, violating the abstraction of solid objects.

Most IGs can detect collisions between predefined points and/or volumes. The application program sends an object's position to the IG; the IG determines whether the object has collided with other predefined objects, reports the result to the application, and draws the object in its new position. The problem lies in the delay introduced by the IG's processing time to compute the contact position and draw the object: by the time the IG reports the contact point to the application, the object has already merged with another object. For most applications this slight merging of objects is not perceivable to the student, since normal viewing distances are large compared to object size and the amount of overlap. For telerobotic training, however, the eyepoints can be so close that object merging can cause negative training.

One solution is to check for contact before the application sends the object's position to the IG. This requires precise geometric models of the end-effectors and their workpieces to reside in the host computer, as well as additional computing resources to run the contact detection algorithm. Rapid prototyping was used to verify the solution approach and reveal limitations in the design. This paper presents the contact detection and motion modification algorithm and discusses its capabilities and limitations.