at1with0 wrote: Self-modifying code is self-aware already. It needs to be in order to be able to change itself.
Machines will achieve human-level intelligence by 2028 with 10% probability, by 2050 with 50% probability, or by 2150 with 90% probability (median estimates), according to an informal poll at the Future of Humanity Institute (FHI) Winter Intelligence conference on machine intelligence in January.
at1with0 wrote: Self-modification entails self-awareness.
It has to be aware of what it is in order to change what it is.
Did I ever say that self-modifying code has human-like sentience?
Sentience and self-awareness are not identical.
Most experts would not consider a robot to be really self-aware just because it can visually recognize its own motion, or itself in a mirror, since a program specifically designed to achieve that kind of recognition can be developed without any genuine awareness capacity. The ability to visually recognize oneself is not enough to achieve self-awareness. Self-recognition can be a side effect of self-awareness, but it is not a prerequisite. We believe a robot needs the capability to attend to its internal states in order to be self-aware. Current approaches, as described in the previous section, do not focus on directing a robot's attention to its own internal processes. If we add an attention process to a robot so that it can focus on the processes that happen internally during self-recognition activities, then we would consider it to be self-aware. What is crucially important is not the ability to recognize itself in a mirror (e.g. a visually inverted reflection), but rather to be aware of its own emotions, perceptions, beliefs and intentions during the recognition process. If a robot has totally lost all of its outward-facing sensations, it may not be aware of its environment (external awareness), but it can still be aware of itself (self-awareness).