Although unmanned aerial systems (UAS) carry no pilot on board, they still require humans to plan and execute missions and to interpret the sensor information they provide. Depending on the level of system autonomy and the nature of the assigned mission, humans may be required to control mission execution remotely and to conduct online image interpretation. This is especially likely for small unmanned aerial systems used for local, real-time reconnaissance in support of military operations in urban terrain. The need to train soldiers quickly and reliably to control small UAS operations demands that the human-system interface be intuitive and easy to master. In this study, participants completed a series of tests of spatial ability and were then trained, in simulation, to teleoperate a micro unmanned aerial vehicle (MAV) equipped with fixed forward and downward cameras. Three aspects of the human-system interface were manipulated to assess their effects on control mastery and target detection. The first factor was the input device: participants used either a mouse or a specially programmed game controller/joystick (similar to that used with the Sony PlayStation 2 video game console). The second factor was the format of the flight control displays: continuous or discrete (analog vs. digital). The third factor was the presentation of sensor imagery: the display either provided streaming video from one camera at a time, requiring the user to switch manually between the two available camera views, or presented the imagery from both cameras simultaneously in separate windows. Dependent variables were (1) time to complete the missions, (2) number of collisions, (3) number of targets detected, and (4) workload, measured with the NASA Task Load Index (TLX). In general, operator performance was better with the game controller than with the mouse. Mission completion times were significantly shorter in the game controller condition, and operators detected significantly more targets, with no significant change in workload relative to the mouse condition. On target detection missions, spatial ability was a significant covariate of time to complete, number of collisions (mission 2), and number of targets located and photographed (mission 4). Spatial ability was also a significant covariate of workload: lower spatial ability was associated with higher workload scores.