Future Media Research / 2.2 Automated Synaesthesia
Ideally, the intermedia tools would span multiple platforms and scale from a text-only interface to an immersive Virtual Environment. The media should be scaled or transformed by the environment in which they are experienced; the final media output should be the result of negotiation between the internal rules associated with each element (or group of elements) and the external conditions and obstacles created by the environment.
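One way to read this "negotiation" is as a blend between what an element prefers and what its environment permits. The following is only a minimal sketch under that assumption; the function `negotiate`, its parameters and the office-ambience scenario are hypothetical, not part of the original text.

```python
# Hypothetical sketch: an element's internal rule (a preferred value)
# is negotiated against limits imposed by the environment.

def negotiate(preferred, env_min, env_max, weight=0.5):
    """Blend an element's preferred value with the environment's limits.

    `weight` sets how strongly the element's own rule is honoured
    relative to the environment's midpoint.
    """
    clamped = max(env_min, min(env_max, preferred))   # obey hard limits
    midpoint = (env_min + env_max) / 2                # environment's bias
    return weight * clamped + (1 - weight) * midpoint

# An "office ambience" environment caps loudness well below the
# element's preference:
volume = negotiate(preferred=0.8, env_min=0.0, env_max=0.3, weight=0.7)
print(round(volume, 3))
```

The environment here acts only as a clamp and a bias; richer rule systems (per-element behaviours, inter-element grouping) would slot into the same negotiation step.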
An extreme example could be a meditative augmented-reality installation in an office building, whose media are displayed as ambient sound and light while its transformation parameters are accumulated from an automatic text generator (logfiles), a film archive and a skin-conductivity sensor. A more practical example could be a training program for facility monitoring that incorporates real-time data input (such as surveillance cameras, electric fences and temperature meters) together with a context-sensitive library system. This program should scale from an SMS message for transmission on a mobile phone to a video presentation and web pages. Another required aspect of scalability is the customisation of the UI to the particular needs and skills of the user.
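The scaling described above, from an SMS alert up to richer channels, can be sketched as a single event rendered differently per channel capability. Everything below is an illustrative assumption: the alert fields, the `render` function and the 160-character SMS budget are stand-ins, not part of the original proposal.

```python
# Hypothetical sketch: one facility-monitoring event, rendered at
# different scales depending on the output channel.

ALERT = {
    "source": "fence sensor 7",
    "event": "perimeter breach",
    "detail": "Contact lost on the east electric fence; nearest camera is CAM-3.",
}

def render(alert, channel):
    if channel == "sms":
        # Strict length budget for mobile-phone transmission.
        return f"{alert['event']} @ {alert['source']}"[:160]
    if channel == "web":
        # Room for the full detail on a web page.
        return f"<h1>{alert['event']}</h1><p>{alert['detail']}</p>"
    raise ValueError(f"unknown channel: {channel}")

sms_text = render(ALERT, "sms")
print(sms_text)
```

A video presentation would be a further branch of the same dispatch; the point is that the content is authored once and scaled by the channel, not re-authored per medium.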
At present, developers of media worlds (such as games) are confronted with the challenge of learning (or creating) several interface paradigms for authoring the different aspects of a single creation. An alternative approach is to transform a range of different media through the same interface. For example, a developer comfortable with video-editing systems could edit sound, graphics, text and scripts through a video-editing interface; a sculptor could have a tool that allows them to model digital media as if it were physical matter.
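One way to make "the same interface for different media" concrete is a shared operation that applies across media types. The sketch below is a minimal assumption of that idea: a single `cut` gesture, familiar from video editing, applied to any medium with a linear extent. The `Clip` class and its method are hypothetical.

```python
# Hypothetical sketch: one editing gesture ("cut") shared across media
# types, so a video-style interface could also trim text or sound.

class Clip:
    """Any medium with a linear extent that supports cutting."""

    def __init__(self, content):
        self.content = content  # a string, a sample list, frames, ...

    def cut(self, start, end):
        """Return a new Clip holding only the span [start:end)."""
        return Clip(self.content[start:end])

text = Clip("automated synaesthesia")
audio = Clip([0.0, 0.2, 0.4, 0.2, 0.0])

print(text.cut(10, 22).content)
print(audio.cut(1, 4).content)
```

The same pattern would extend to other shared gestures (splice, layer, fade), each defined once and interpreted per medium.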
Nicholas Gaffney and Maja Kuzmanovic, FOAM (the full text of the media future conference).