Who doesn't know these images: half real, half unreal, sometimes blissful, often rather threatening, frightening. You want to run away but cannot; you are as if paralyzed. You want to scream, but your voice fails, stays silent. Disaster is very close. And then again those wonderful situations, with dear people, perhaps with the dearest person, the one you always wanted to be with. Bright landscapes, colorful, also bizarre, grotesque, fantastic houses, cities, imaginative clothing … our dreams.
Some do not take them very seriously, laugh about them, repress them; others see dreams as a language in which our inner self wants to tell us something. They try to read dreams, seek to interpret them. It is widely assumed that our dreams do not come out of nothing but draw their inspiration and energy from what occupies us in everyday life: a kind of echo of daily life, not one-to-one, but creatively altered, distorted, heightened, exaggerated, with fantastic elements that would never occur in ordinary life. And it seems to be the emotions in particular that supply a specific driving energy for dreams, above all the repressed or unresolved ones: fears that haunt or even torment us, traumatic experiences, or unfulfilled longings, needs, suppressed drives that are really there but for some reason cannot be lived out openly. All of these are then said to do their mischief in the underground of our inner life, tormenting, plaguing, or even delighting us in our dreams.
more:
- The Dream of the Artificial Mind (Gerd Doeben-Henisch, 25.04.2015)
A Robot Teaches Itself How to Walk [4:16]
Published on 27.10.2013
Starfish, a four-legged robot, walks by using an internal self-model it has developed and which it continuously improves. If it loses a limb, it can adapt its internal self-model. This experiment demonstrates how a legged robot automatically synthesizes a predictive model of its own topology (where and how its body parts are connected) through limited yet self-directed interaction with its environment, and then uses this model to synthesize successful new locomotive behavior before and after damage. The legged robot learned how to move forward based on only 16 brief self-directed interactions with its environment. These interactions were unrelated to the task of locomotion, driven only by the objective of disambiguating competing internal models.
Visit this link:
http://creativemachines.cornell.edu/e...
(Cornell Creative Machines Lab)
Josh Bongard, of the Department of Computer Science at the University of Vermont, and his colleagues Victor Zykov and Hod Lipson have created an artificial starfish that gradually develops an explicit internal self-model. Their four-legged machine uses actuation-sensation relationships to infer indirectly its own structure and then uses this self-model to generate forward locomotion. When part of its leg is removed, the machine adapts its self-model and generates alternative gaits — it learns to limp. It can restructure its body representation following the loss of a limb; thus, in a sense, it can learn. As its creators put it, it can "autonomously recover its own topology with little prior knowledge," by constantly optimizing the parameters of its resulting self-model. The starfish not only synthesizes an internal self-model but also uses it to generate intelligent behavior.
----------
This robot needs to learn how to walk. The challenge is that it does not know what it looks like. If you look at this robot, you can see it has four legs. But the robot has no clue what it looks like: it does not know if it is a snake or a spider; it actually has no clue how its eight motors and legs are arranged. It does not even know about legs! So what does it do?
So imagine yourself sitting in a black box: no windows, nothing, just a black box. All you have are eight knobs, and these knobs are connected somehow through motors; as you turn them, you can feel the box moving left and right. You don't know how the motors are connected to the morphology of the machine. This is what the robot feels. So how does it learn to walk?
It could do trial and error: move the knobs and guess its way forward. But an alternative is a kind of systematic exploration. So how does this robot work?
It begins by making random motions, moving the motors randomly; this is called motor babbling. It just moves the motors in a random way, collects all the sensor information, and forms hypotheses about what it might be. What you can see in the top left are all the hypotheses: different shapes, different self-images it came up with that explain the relationship between actuation and sensation.
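The babbling stage can be sketched in a few lines of Python. Everything here is a deliberate simplification for illustration, not the actual system: the robot's hidden body is reduced to one unknown weight per motor, and the competing self-images are just random candidate weight vectors scored against the recorded actuation-sensation pairs.

```python
import random

random.seed(0)

NUM_MOTORS = 8

# Hidden "true" body: each motor contributes an unknown weight to the sensed
# displacement. The robot never sees these weights directly.
TRUE_WEIGHTS = [random.uniform(-1.0, 1.0) for _ in range(NUM_MOTORS)]

def sense(command):
    """What the robot feels after issuing a motor command."""
    return sum(w * c for w, c in zip(TRUE_WEIGHTS, command))

# Motor babbling: issue random commands, record actuation-sensation pairs.
# (The real Starfish robot needed only 16 such interactions.)
babble_data = []
for _ in range(16):
    command = [random.uniform(-1.0, 1.0) for _ in range(NUM_MOTORS)]
    babble_data.append((command, sense(command)))

# Candidate self-models: each hypothesizes its own weight vector and is
# scored by how well it explains the babbling data (sum of squared errors).
def model_error(weights, data):
    return sum(
        (sum(w * c for w, c in zip(weights, cmd)) - observed) ** 2
        for cmd, observed in data
    )

candidates = [
    [random.uniform(-1.0, 1.0) for _ in range(NUM_MOTORS)] for _ in range(50)
]
best = min(candidates, key=lambda m: model_error(m, babble_data))
print("best candidate's error on the babbling data:", model_error(best, babble_data))
```

The real robot's hypotheses are full body topologies rather than weight vectors, but the loop is the same: act randomly, record what you feel, keep the self-images that explain it.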
It then looks for the next action to perform: the one that causes the most disagreement between the predictions of these candidate models. Within a relatively small number of these babbling actions, it will figure out what it looks like.
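This action-selection step — pick the action the competing models disagree on most, because its real outcome must falsify at least one of them — can be sketched like this, again with hypothetical linear self-models standing in for the robot's candidate body shapes:

```python
import random

random.seed(1)

NUM_MOTORS = 8

# Two competing self-models (hypothetical weight vectors) that both fit the
# data seen so far. The robot must pick the action that best separates them.
model_a = [random.uniform(-1.0, 1.0) for _ in range(NUM_MOTORS)]
model_b = [random.uniform(-1.0, 1.0) for _ in range(NUM_MOTORS)]

def predict(model, command):
    return sum(w * c for w, c in zip(model, command))

def disagreement(command):
    """How differently the two candidate models predict the sensed outcome."""
    return abs(predict(model_a, command) - predict(model_b, command))

# Sample candidate actions and execute the one the models disagree on most;
# observing its real outcome will rule out at least one hypothesis.
actions = [
    [random.uniform(-1.0, 1.0) for _ in range(NUM_MOTORS)] for _ in range(100)
]
most_informative = max(actions, key=disagreement)
print("largest disagreement among sampled actions:", disagreement(most_informative))
```

This is why so few interactions suffice: each babble is chosen to be maximally informative rather than merely random.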
Now, with that self-model, it can figure out how to move, and because the model is pretty close to reality, whatever makes the model move will make the robot move in reality as well.
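Planning with the finished self-model might be sketched as follows. The linear body, the `self_model`, and its noise level are all illustrative assumptions; the point is only the structure of the argument: search in the model, then act in the world.

```python
import random

random.seed(2)

NUM_MOTORS = 8
TRUE_WEIGHTS = [random.uniform(-1.0, 1.0) for _ in range(NUM_MOTORS)]

# Pretend the learned self-model is close to reality: each learned weight is
# off by at most 0.05 from the true one.
self_model = [w + random.uniform(-0.05, 0.05) for w in TRUE_WEIGHTS]

def displacement(weights, command):
    return sum(w * c for w, c in zip(weights, command))

# Plan inside the model: search for the command the *model* says moves the
# body farthest forward...
candidates = [
    [random.uniform(-1.0, 1.0) for _ in range(NUM_MOTORS)] for _ in range(500)
]
best_cmd = max(candidates, key=lambda c: displacement(self_model, c))

# ...then see what that same command does on the *real* body.
planned = displacement(self_model, best_cmd)
actual = displacement(TRUE_WEIGHTS, best_cmd)
print("planned displacement:", planned)
print("actual displacement: ", actual)
```

Because the per-weight error is bounded, the real outcome can differ from the planned one by only a small amount, which is exactly the transcript's point: what moves the model moves the robot.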
When you look at this robot, you have to remember that it was not programmed to move; it did not have a model of itself before it started, nor did it do trials of walking before moving. We bypassed these three things and allowed it, through this idea of self-modeling, to learn how to do this task.
----------
Higher animals use some form of an internal model of themselves for planning complex actions and predicting their consequences, but it is not clear if and how these self-models are acquired or what form they take.
This kind of simulation happens in human brains all the time. A wide range of research suggests that action planning requires this kind of simulation in order to be implemented properly. We don't usually visualize actions explicitly, and certainly not for simple reaching tasks like the one above. The simulation proceeds very quickly and unconsciously, based on a wide range of proprioceptive and visual information.
While this simulation is usually unconscious, there are a few domains (such as the high jump in Olympic track and field) in which it is used very explicitly and very consciously, as people engage in deliberate mental simulation.
Published on 18.03.2015
Full story: http://bit.ly/1MNJZCS
The human self has five components. Machines now have three of them. How far away is artificial consciousness – and what does it tell us about ourselves?