The idea that you can build a machine with self-awareness, consciousness, and beliefs like ours is fallacious. It echoes the classical philosophical habit of posing questions about individuals in isolation, abstracted from their species and from the theory of evolution, that is, from the particular stage of human evolution they happen to occupy.
The universal truth that philosophy seeks becomes impossible by definition once you view these questions from the standpoint of a species that is still evolving.
What about early Homo sapiens? What about the Neanderthals?
Philosophy asks questions that are only meaningful in philosophical English, or German, or whatever the language happens to be; the same question is not necessarily even reproducible in another language. Classical philosophy, which regrettably still commands respect today (Martin Heidegger is an illustrious example of these charlatans), tries to capture a truth that holds universally, and yet the question itself, along with all our efforts to answer it, remains pitifully confined to a specific language, a specific era, a specific stage of human evolution.
In the same way, Strong AI is ill-conceived from its very premises: the attempt is to create self-awareness in a singularity, so to speak, in a single agent that belongs to no species and is not embedded in the world the way we and all other animals are.
That is why Strong AI is nonsense, and the technological singularity is not going to happen any time soon, or ever.