ZIP TUBE
KINETIC SCULPTURE
Origami Installation
Collaborative Project
Sept. 2016 - Nov. 2016
INSTRUCTOR
Guvenc Ozel
Benjamin Ennemoser & Mertcan Buyuksandalyaci
ACADEMIC YEAR
Fall 2016
CONSTRUCTION SYSTEM
Paper
TOOLS
KUKA robots, laser cutting, Arduino, Pixy CMUcam
COLLABORATORS
Tatar Huma Nazli
Akitopu Alara
In this project, we aim to build a kinetic sculpture that interacts with its environment and with human activity. The sculpture's movement derives from the material properties of folded paper, and it can transform into more than three distinct configurations.
We are inspired by the "zipper tube," an origami structure introduced in the following research:
Evgueni T. Filipov, Tomohiro Tachi, and Glaucio H. Paulino, "Origami tubes assembled into stiff, yet reconfigurable structures and metamaterials," Proceedings of the National Academy of Sciences 112(40), 2015.
We use this zipper tube as the single unit from which the sculpture is constructed, taking advantage of the structural properties of folded paper. Besides that, we develop several variations of this single unit.
Mid-term Approach
After developing a design language from the zipper-tube origami, we fabricate a full-size paper model with the existing geometry and its variations. We then use animation to simulate how the model is driven and transformed by a robot arm.
With the pre-programmed robot, we can easily test the performance of different parts of the whole model and how each part contributes to the kinetic system.
Mechanical Research
After the midterm, we no longer drive the system with an external force; instead, we start working with an open-source electronics prototyping platform (Arduino) so that the object can respond to its environment.
To build a cyber-physical model, we need a mechanical system that drives the origami structure. We therefore install a stepper motor under the center tube and change the shape of the bottom loop; the motor can then actuate the whole model through the folding behavior of the origami system.
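A minimal sketch of this actuation in Arduino, assuming a four-wire stepper on digital pins 8-11 driven through the standard Stepper library; the pin assignments, step count, and fold travel are placeholders, not the installation's documented values:

    #include <Stepper.h>

    // Assumed hardware: a 200-step/revolution motor wired to pins 8-11.
    const int STEPS_PER_REV = 200;
    Stepper tubeMotor(STEPS_PER_REV, 8, 9, 10, 11);

    void setup() {
      tubeMotor.setSpeed(30);  // RPM, kept slow so the paper folds gently
    }

    void loop() {
      // Fold: rotate the bottom loop half a turn, pause, then unfold.
      tubeMotor.step(STEPS_PER_REV / 2);
      delay(2000);
      tubeMotor.step(-STEPS_PER_REV / 2);
      delay(2000);
    }

The Stepper library blocks while stepping, which is acceptable here because moving the sculpture is the only task in the loop.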
View from embedded camera
Machine Vision
Using a device called the Pixy CMUcam, we can "teach" the robot to recognize and remember different objects by their color signatures. We then develop a machine-vision system that combines the Pixy with the Arduino, enabling the cyber-physical model to interact with the movement of different objects.
Eventually, we can use different objects to guide the motion of the origami cyber-physical model.
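As a sketch of how this vision-to-motion loop can work, assuming the original Pixy Arduino library over SPI and the same stepper setup as above; the left/right mapping is illustrative rather than the installation's actual choreography:

    #include <SPI.h>
    #include <Pixy.h>
    #include <Stepper.h>

    const int STEPS_PER_REV = 200;           // assumed motor resolution
    Stepper tubeMotor(STEPS_PER_REV, 8, 9, 10, 11);
    Pixy pixy;

    void setup() {
      pixy.init();
      tubeMotor.setSpeed(30);
    }

    void loop() {
      // getBlocks() returns how many trained color signatures Pixy sees.
      uint16_t blocks = pixy.getBlocks();
      if (blocks > 0) {
        // blocks[0] is the largest detection; the Pixy frame is 320 px wide,
        // so x < 160 means the object sits in the left half of the view.
        if (pixy.blocks[0].x < 160) {
          tubeMotor.step(10);    // fold toward one side
        } else {
          tubeMotor.step(-10);   // fold toward the other side
        }
      }
    }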
Final Video