About: Projection augmented model

An Entity of Type: owl:Thing, within Data Space: 134.155.108.49:8890, associated with source dataset(s)

Attributes    Values
rdfs:label
  • Projection augmented model
abstract
  • “We live between two realms: our physical environment and cyberspace. Despite our dual citizenship, the absence of seamless couplings between these parallel existences leaves a great divide between the worlds of [computer-generated] bits and [real physical] atoms. At present, we are torn between these parallel but disjoint spaces” (Ishii & Ullmer, 1997, p. 234). These ‘spaces’ each have unique qualities that the other cannot provide. Real objects can be physically handled and naturally manipulated to be viewed from any direction, which is essential for ergonomic evaluation and provides a strong sense of palpability (Ishii & Ullmer, 1997). Although ‘simulated’ haptic feedback devices enable some aspects of computer-generated objects to be ‘touched’, they cannot match this level of functionality (Evans, Wallace, Cheshire & Sener, 2005; Baradaran & Stuerzlinger, 2005; Khoudja, Hafez & Kheddar, 2004). It is, therefore, unsurprising that physical objects are still used for many applications, such as product design (Dutson & Wood, 2005). However, computer-generated objects have a key advantage: they provide a level of flexibility that cannot be matched by physical objects. Therefore, a display is needed that ‘joins’ the real physical world and computer-generated objects together, enabling them to be experienced simultaneously (Gibson, Gao & Campbell, 2004; Ishii & Ullmer, 1997).

    Tangible User Interfaces and Augmented Reality both aim to address this issue. A Tangible User Interface (TUI) uses real physical objects to both represent and interact with computer-generated information (Figure 1). However, whilst TUIs create a physical ‘link’ between real and computer-generated objects, they do not create the illusion that the computer-generated objects are actually in a user’s real environment. That is the aim of Augmented Reality. Unlike Virtual Reality (VR), which immerses a user in a computer-generated environment (Burdea & Coiffet, 2003; Brooks, 1999), Augmented Reality (AR) joins physical and virtual ‘spaces’ together by creating the illusion that computer-generated objects are actually real objects in a user’s environment (Azuma et al., 2001) (Figure 1).

    Furthermore, head-mounted-display based AR systems, and indeed VR systems, can directly incorporate physical objects: as a user reaches out to a computer-generated object that they can see, they touch an equivalent physical model placed at the same spatial location (Whitton, Lok, Insko & Brooks, 2005; Billinghurst, Grasset & Looser, 2005; Borst & Volz, 2005; Lee, Chen, Kim, Han & Pan, 2004; Hoffman, Garcia-Palacios, Carlin, Furness & Botella-Arbona, 2003). Such systems enable the computer-generated visual appearance of the object to be dynamically altered, whilst the physical model provides haptic feedback for the object’s underlying form. However, head-mounted-display based systems require users to wear equipment, which limits the number of people who can simultaneously use the display and prevents new people from spontaneously joining the group (Hirooka & Saito, 2006; Lee & Park, 2006).

    A variant of the AR paradigm that does not suffer from these limitations is Spatially Augmented Reality (Figure 1). Spatially Augmented Reality (SAR) displays project computer-generated information directly into the user’s environment (Bimber & Raskar, 2005). Although there are several possible display configurations, the most natural type is the Projection Augmented model.
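The projection step underlying such a display can be sketched briefly. The sketch below is illustrative only and is not taken from the source: it assumes a simple pinhole model for a calibrated projector, and the function name project_to_projector and the calibration values are hypothetical. It shows how a point on the physical model's surface maps to the projector pixel that must be lit to colour that point.

    import numpy as np

    def project_to_projector(points_world, K, R, t):
        # Transform Nx3 world-space surface points into the projector's
        # coordinate frame, then apply the pinhole intrinsics K and the
        # perspective divide to obtain projector pixel coordinates.
        pts_proj = points_world @ R.T + t
        pix_hom = pts_proj @ K.T
        return pix_hom[:, :2] / pix_hom[:, 2:3]

    # Hypothetical calibration: focal length in pixels and principal point.
    K = np.array([[1400.0,    0.0, 640.0],
                  [   0.0, 1400.0, 360.0],
                  [   0.0,    0.0,   1.0]])
    R = np.eye(3)                   # projector axes aligned with the world
    t = np.array([0.0, 0.0, 1.0])   # model 1 m in front of the projector

    surface_point = np.array([[0.05, 0.0, 0.0]])  # a point on the model
    print(project_to_projector(surface_point, K, R, t))  # -> [[710. 360.]]

In a full system, the virtual appearance would be rendered from the projector's viewpoint using exactly this mapping, so that the projected imagery lands correctly on the physical model's surface.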