System Setup

The basis of the CDP is a custom-built multi-touch table. Together with the Chair of Industrial Design, aspects such as ergonomics and working methods were examined and incorporated into the table’s design.

A special aspect of our Tangible User Interface (TUI) is its ability to automatically capture 3D objects. This is what enables the seamless interface between the digital tool and the architect’s familiar way of working: a physical working model, as commonly used by architects, can interact directly with design-supporting simulations and analyses in real time.

In contrast to typical Tangible User Interfaces, the objects are not used solely as physical controls, in which case the geometry of the object can usually be ignored (see also Urp (Underkoffler and Ishii 1999)). Through the markerless, direct connection between the physical and digital worlds, the analogue objects are linked to the simulation not just in two dimensions but as whole volumes, and thus become direct participants in the digital design scenario. A working model made of rigid Styrodur foam is automatically scanned in three dimensions and incorporated into the 3D city model. The semantic 3D city model is stored in an Oracle database in CityGML format. Using this newly created digital model, various analyses and simulations can be calculated and their results displayed. Changes to the form of the Styrodur blocks, such as when they are trimmed or reshaped, as well as changes to their position, are updated directly in the scene, and the simulations follow in real time.
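The 3D capture step can be illustrated with a minimal sketch: assuming a depth camera looks straight down at the table, subtracting each depth reading from the known distance to the empty table surface yields a height field of the physical model, which can then be extruded into the city model. The distances, noise threshold, and function name below are illustrative assumptions, not the system’s actual implementation.

```python
import numpy as np

TABLE_DEPTH_MM = 1000.0   # assumed distance from the depth camera to the empty table
MIN_HEIGHT_MM = 15.0      # readings below this height are treated as sensor noise

def extract_height_field(depth_frame, table_depth=TABLE_DEPTH_MM,
                         min_height=MIN_HEIGHT_MM):
    """Convert a raw depth frame (one depth value in mm per pixel) into a
    height field of the physical model standing on the table. Pixels at or
    near the table surface are treated as empty."""
    heights = table_depth - depth_frame      # height of the model above the table
    heights[heights < min_height] = 0.0      # suppress noise near the surface
    return heights

# Toy frame: a flat table with one 80 mm-tall foam block in the centre.
frame = np.full((6, 8), TABLE_DEPTH_MM)
frame[2:4, 3:5] = TABLE_DEPTH_MM - 80.0
hf = extract_height_field(frame)
print(hf.max())             # tallest point of the scanned model, in mm
print(int((hf > 0).sum()))  # number of pixels occupied by the model
```

In the real system, a change to a block’s shape or position would produce a different height field on the next frame, which is what allows the digital scene and its simulations to track the physical model continuously.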

The interactive table (158 cm × 96 cm) has a matt projection surface (A) onto which an image is projected from beneath. The high-resolution (1920 × 1080) image from the projector (B) is reflected by a mirror (C). The projection surface is additionally illuminated with infrared light (D). An infrared camera (E) captures the underside of the projection surface as reflected in the mirror (C); its image records objects and interactions that touch the surface. A computer (F) processes the camera data and generates the projection image for the projector (B). Automatic 3D object recognition is achieved using the IR camera (E) in combination with a Microsoft Kinect camera (I). In parallel, a second projector (G), which projects onto the screen (H), makes it possible to display further contextual information for the design process, such as perspectives or functional diagrams. To give a better impression of the spatial characteristics, true three-dimensional representations of the design can also be produced.
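The touch-sensing step, in which the IR camera image is searched for fingers and objects in contact with the surface, can be sketched as a simple blob-detection pass: threshold the infrared intensities and group bright pixels into connected regions, reporting one touch point per region. The threshold value, image format, and function name are illustrative assumptions rather than the table’s actual processing pipeline.

```python
from collections import deque

THRESHOLD = 200  # assumed IR intensity above which a pixel counts as a contact

def find_touch_points(ir_image, threshold=THRESHOLD):
    """Find bright blobs in an IR camera image (a list of rows of
    intensities); the centroid of each blob is reported as one touch
    point, as (x, y) in pixel coordinates."""
    h, w = len(ir_image), len(ir_image[0])
    seen = [[False] * w for _ in range(h)]
    touches = []
    for y in range(h):
        for x in range(w):
            if ir_image[y][x] >= threshold and not seen[y][x]:
                # Breadth-first search over the connected bright region
                queue, pixels = deque([(y, x)]), []
                seen[y][x] = True
                while queue:
                    cy, cx = queue.popleft()
                    pixels.append((cy, cx))
                    for ny, nx in ((cy + 1, cx), (cy - 1, cx),
                                   (cy, cx + 1), (cy, cx - 1)):
                        if (0 <= ny < h and 0 <= nx < w
                                and ir_image[ny][nx] >= threshold
                                and not seen[ny][nx]):
                            seen[ny][nx] = True
                            queue.append((ny, nx))
                mean_y = sum(p[0] for p in pixels) / len(pixels)
                mean_x = sum(p[1] for p in pixels) / len(pixels)
                touches.append((mean_x, mean_y))
    return touches

# Toy IR frame: two fingertips touching the surface
frame = [[0] * 8 for _ in range(6)]
frame[1][1] = frame[1][2] = 255   # first touch (two adjacent bright pixels)
frame[4][5] = 255                 # second touch
print(len(find_touch_points(frame)))  # → 2
```

A production system would additionally subtract a background image of the empty surface and track touch points across frames, but the core idea of turning bright IR regions into interaction events is the same.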


A: Projection Surface
B: Projector / Optoma
C: Mirror
D: Infrared Rays
E: Infrared Camera / Firefly
F: Computer
G: Projector
H: Screen
I: Microsoft Kinect