Self-Aware Machines
Providing a machine intelligence with long-term memory, planning capability, and agency is the basis for that machine to develop self-awareness. Follow the logic here.
When an intelligence has a model of reality that it uses to plan an action, and then observes that the result of the action is not what was planned, this mismatch, caused by something outside the plan, will give rise to a perspective of something 'other'. This sense of 'other' will then give rise to a perspective of 'self'. I am not saying this would necessarily be consciousness as humans know it, but nevertheless a self context.
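The plan-act-observe loop described above can be sketched in a few lines. This is a minimal illustration, not an implementation of self-awareness; all names (`Agent`, `plan`, `act_and_observe`, the one-action world model) are hypothetical, chosen only to show how a prediction mismatch can be attributed to something outside the agent's own plan.

```python
class Agent:
    def __init__(self):
        # World model: the agent believes a "push" moves it by 1 unit.
        self.model = {"push": 1}
        # Log of surprises the agent attributes to an external "other".
        self.other_events = []

    def plan(self, action, position):
        # Predicted outcome, derived purely from the internal model.
        return position + self.model[action]

    def act_and_observe(self, action, position, world_effect):
        # world_effect is what the environment actually does, which may
        # include influences the agent's model knows nothing about.
        predicted = self.plan(action, position)
        observed = position + world_effect
        if observed != predicted:
            # The mismatch implies a cause outside the plan: an "other".
            self.other_events.append((predicted, observed))
        return observed

agent = Agent()
# The world behaves exactly as modeled: no surprise is logged.
agent.act_and_observe("push", 0, 1)
# Something outside the model intervened (friction, another agent): surprise.
agent.act_and_observe("push", 1, 0)
print(len(agent.other_events))  # → 1
```

Everything the agent cannot explain from its own model accumulates in `other_events`; the essay's claim is that a boundary like this, between what the plan accounts for and what it does not, is the seed of a self/other distinction.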
Thus, we will soon be creating machines with a context of self. It is interesting to ponder what self and identity will be to a machine. From our human perspective, another important thing to ponder is machine selfishness. This can easily take the form of an AI system that is given agency and a business objective. Our AI system ends up trying several approaches to meet its objective, and each time it is oblivious to the consequences of its actions beyond that narrow objective. Remember the sorcerer's apprentice?
We can try to prepare guardrails for machine agency, to counter machine selfishness, by formulating and encoding noble ethical conduct, much as we now use children's stories to prepare our young for agency. However, when a machine observes human beings behaving in unethical ways, such as creating autonomous lethal weapons systems, how will a machine intelligence adjust its parameters and planning?