Generating 3D architectural models based on hand motion and gesture
Authors: Xiao Yi; Shengfeng Qin; Jinsheng Kang
- Publisher
- Elsevier Science
- Year
- 2009
- Language
- English
- File size
- 820 KB
- Volume
- 60
- Category
- Article
- ISSN
- 0166-3615
Synopsis
In this approach, hand gestures are treated as architectural hand signs. Given a hand sign, the corresponding motion sketches provide geometric information for modeling. The hand sign is performed with the left hand, while motion sketching is conducted with a Marker-Pen operated by the right hand. The hand signs and sketching information are then processed together to create 3D models. This novel 3D architectural design modeling procedure is illustrated in Fig. 1.
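The two-hand pipeline in the synopsis can be sketched in code: a recognized left-hand sign selects what to build, and the right-hand Marker-Pen trajectory supplies the geometry. This is a minimal illustrative sketch; the function and type names (`build_model`, `Model3D`, the "wall" sign) are assumptions for illustration, not the authors' implementation.

```python
from dataclasses import dataclass
from typing import List, Tuple

Point3 = Tuple[float, float, float]

@dataclass
class Model3D:
    kind: str               # primitive selected by the recognized hand sign
    vertices: List[Point3]  # geometry derived from the pen's motion sketch

def build_model(hand_sign: str, sketch: List[Point3]) -> Model3D:
    """Combine a recognized hand sign with a pen trajectory into a 3D primitive."""
    if hand_sign == "wall":
        # Illustrative rule: treat the sketch's axis-aligned bounding box
        # as the wall volume and emit its eight corner vertices.
        xs, ys, zs = zip(*sketch)
        lo = (min(xs), min(ys), min(zs))
        hi = (max(xs), max(ys), max(zs))
        verts = [
            (lo[0], lo[1], lo[2]), (hi[0], lo[1], lo[2]),
            (hi[0], hi[1], lo[2]), (lo[0], hi[1], lo[2]),
            (lo[0], lo[1], hi[2]), (hi[0], lo[1], hi[2]),
            (hi[0], hi[1], hi[2]), (lo[0], hi[1], hi[2]),
        ]
        return Model3D("wall", verts)
    raise ValueError(f"unrecognized hand sign: {hand_sign}")

# Example: a short pen stroke captured as 3D points
stroke = [(0.0, 0.0, 0.0), (2.0, 0.1, 0.0), (2.0, 0.1, 1.0)]
wall = build_model("wall", stroke)
print(wall.kind, len(wall.vertices))  # wall 8
```

In the actual system the sign and trajectory come from motion capture rather than literals; the point here is only the division of labor between the two hands.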
With reference to the recent work on motion capture and recognition [1-14] and hand gesture-based 3D modeling [15-21], our work has the following contributions:
The design method provides a natural 3D design interface based on two-handed interaction and gestures, which will be of interest to interaction design researchers.