
Lund University Publications


Artificial Intelligence for Predictive and Evidence Based Architecture Design

Bhatt, Mehul; Suchan, Jakob; Schultz, Carl; Kondyli, Vasiliki and Goyal, Saurabh (2016) In Proceedings of the AAAI Conference on Artificial Intelligence 30(1).
Abstract
The evidence-based analysis of people's navigation and wayfinding behaviour in large-scale built-up environments (e.g., hospitals, airports) encompasses the measurement and qualitative analysis of a range of aspects, including people's visual perception in new and familiar surroundings, their decision-making procedures and intentions, and the affordances of the environment itself. In our research on large-scale evidence-based qualitative analysis of wayfinding behaviour, we construe visual perception and navigation in built-up environments as a dynamic narrative construction process of movement and exploration driven by situation-dependent goals, guided by visual aids such as signage and landmarks, and influenced by environmental (e.g., presence of other people, time of day, lighting) and personal (e.g., age, physical attributes) factors. We employ a range of sensors for measuring the embodied visuo-locomotive experience of building users: eye-tracking, egocentric gaze analysis, external camera-based visual analysis to interpret fine-grained behaviour (e.g., stopping, looking around, interacting with other people), and manual observations made by human experimenters. Observations are processed, analysed, and integrated into a holistic model of the visuo-locomotive narrative experience at the individual and group level. Our model also combines embodied visual perception analysis with analysis of the structure and layout of the environment (e.g., topology, routes, isovists) computed from available 3D models of the building. In this framework, abstract regions such as the visibility space, regions of attention, and eye movement clusters are treated as first-class visuo-spatial and iconic objects that can be used for interpreting the visual experience of subjects in a high-level qualitative manner.
The final integrated analysis of the wayfinding experience can even be presented in a virtual reality environment, thereby providing an immersive experience (e.g., using tools such as the Oculus Rift) of the qualitative analysis for single participants as well as for a combined analysis of large groups. This capability is especially important for experiments in post-occupancy analysis of building performance. Our construction of indoor wayfinding experience as a form of moving image analysis centralizes the role and influence of perceptual visuo-spatial characteristics and morphological features of the built environment in the discourse on wayfinding research. We demonstrate the impact of this work with several case studies, particularly focussing on a large-scale experiment conducted at the New Parkland Hospital in Dallas, Texas, USA.
author
Bhatt, Mehul; Suchan, Jakob; Schultz, Carl; Kondyli, Vasiliki and Goyal, Saurabh
publishing date
2016-03
type
Contribution to journal
publication status
published
subject
keywords
applied artificial intelligence, visual perception, architectural cognition
in
Proceedings of the AAAI Conference on Artificial Intelligence
volume
30
issue
1
external identifiers
  • scopus:85007165449
DOI
10.1609/aaai.v30i1.9850
language
English
LU publication?
no
id
0ba350a6-9084-4a40-bc31-362280f5916a
alternative location
https://ojs.aaai.org/index.php/AAAI/article/view/9850
date added to LUP
2024-12-18 15:22:54
date last changed
2025-04-04 15:30:05
@article{0ba350a6-9084-4a40-bc31-362280f5916a,
  abstract     = {{The evidence-based analysis of people's navigation and wayfinding behaviour in large-scale built-up environments (e.g., hospitals, airports) encompasses the measurement and qualitative analysis of a range of aspects, including people's visual perception in new and familiar surroundings, their decision-making procedures and intentions, and the affordances of the environment itself. In our research on large-scale evidence-based qualitative analysis of wayfinding behaviour, we construe visual perception and navigation in built-up environments as a dynamic narrative construction process of movement and exploration driven by situation-dependent goals, guided by visual aids such as signage and landmarks, and influenced by environmental (e.g., presence of other people, time of day, lighting) and personal (e.g., age, physical attributes) factors. We employ a range of sensors for measuring the embodied visuo-locomotive experience of building users: eye-tracking, egocentric gaze analysis, external camera-based visual analysis to interpret fine-grained behaviour (e.g., stopping, looking around, interacting with other people), and manual observations made by human experimenters. Observations are processed, analysed, and integrated into a holistic model of the visuo-locomotive narrative experience at the individual and group level. Our model also combines embodied visual perception analysis with analysis of the structure and layout of the environment (e.g., topology, routes, isovists) computed from available 3D models of the building. In this framework, abstract regions such as the visibility space, regions of attention, and eye movement clusters are treated as first-class visuo-spatial and iconic objects that can be used for interpreting the visual experience of subjects in a high-level qualitative manner.
The final integrated analysis of the wayfinding experience can even be presented in a virtual reality environment, thereby providing an immersive experience (e.g., using tools such as the Oculus Rift) of the qualitative analysis for single participants as well as for a combined analysis of large groups. This capability is especially important for experiments in post-occupancy analysis of building performance. Our construction of indoor wayfinding experience as a form of moving image analysis centralizes the role and influence of perceptual visuo-spatial characteristics and morphological features of the built environment in the discourse on wayfinding research. We demonstrate the impact of this work with several case studies, particularly focussing on a large-scale experiment conducted at the New Parkland Hospital in Dallas, Texas, USA.}},
  author       = {{Bhatt, Mehul and Suchan, Jakob and Schultz, Carl and Kondyli, Vasiliki and Goyal, Saurabh}},
  keywords     = {{applied artificial intelligence; visual perception; architectural cognition}},
  language     = {{eng}},
  month        = {{03}},
  number       = {{1}},
  journal      = {{Proceedings of the AAAI Conference on Artificial Intelligence}},
  title        = {{Artificial Intelligence for Predictive and Evidence Based Architecture Design}},
  url          = {{http://dx.doi.org/10.1609/aaai.v30i1.9850}},
  doi          = {{10.1609/aaai.v30i1.9850}},
  volume       = {{30}},
  year         = {{2016}},
}