I think we are already unsure about what goes on in the processing circuits. The angle of your question is right. We might also ponder our own machine-like construction and assume our own awareness with more caution. There have been arguments that we have direct access to other minds (Zemach), but though I believe the private is more public than most think, we obviously infer, as you say. It's a tough one at both ends. My dog does not recognise itself in a mirror, but we could program a robot to do this. And what if we transfer our minds to substrate-independent status in a robot?
Broadly, Continental philosophy often sees human beings as essentially social beings. We are thought to exist at our deepest level in and as a community. We depend on others not merely for our existence, but for our very sense of ourselves, and our awareness of others is claimed to be at the heart of our awareness of ourselves. Opposed to this view are those who see each of us as aware of ourselves and our experience in a way that we can never be with respect to any other human being. Self-enclosed, we are seen as needing to reach an understanding of the inner lives of others, somehow, on the basis of our own unique awareness of our inner lives. However, this denies us the comfort of a more direct closeness. We live forever with a gap between ourselves and others.
Zemach, E., 1966, "Sensations, Raw Feels, and Other Minds", The Review of Metaphysics, XX: 317–340.
On Monday, March 2, 2015 at 10:20:44 AM UTC, RP Singh wrote:
Neil, how can we be so sure? Awareness in others is inferred because we relate them to ourselves. We are aware, and so we infer that organisms which show certain signs must be like us. Maybe in the future, when robots walk around and talk to us, we could not be that sure. There is always a possibility, isn't there?

On Mon, Mar 2, 2015 at 3:39 PM, archytas <nwterry@gmail.com> wrote:
Definitely not like us, RP - though we aren't that sure how we process the external either. No machine has yet woken up to speak to me - but they are doing things I don't understand and producing results we haven't thought of, in ways we can't work out the why of. We can program them to relate to sound, sight, smell, touch and taste (and some other sensing) - but sentience is missing. They can learn from sensor input.
On Monday, March 2, 2015 at 9:46:30 AM UTC, RP Singh wrote:
Neil, are robots aware of sights and sounds like us, or do they just recognise such things without awareness?

On Mon, Mar 2, 2015 at 1:34 PM, archytas <nwterry@gmail.com> wrote:
I'm not sure it has to do anything much to us, Allan - though potentially it changes everything. The machines could soon be biological - they can already record information as DNA. Corrupting programs might be stopped by surveillance routines. We could look at this as human, even soul, enhancement, and as educational.
On Monday, 2 March 2015 07:35:34 UTC, Allan Heretic wrote:
AI sounds cool.. several problems, though. It would be easy to program violence in, and the manipulation shown without a chip is going to suddenly change with a chip. Right!
The other problem is the soul. In the mix, with or without a soul, will a pure AI contain a soul?
Avoid: murder, rape and enslavement of others.
-----Original Message-----
From: archytas <nwterry@gmail.com>
To: minds-eye@googlegroups.com
Sent: Mon, 02 Mar 2015 8:09 AM
Subject: Mind's Eye Moral Enhancement

Humans developed to live in small communities. We were pretty murderous in them, and you are now exposed to only a tenth of the chance of dying a violent death. We are not well equipped for today's global circumstances, and we are not much good at large-scale collective moral problems. Moral enhancement in traditional form has been about education, religion, or short-term drugs and lobotomy-type intervention. Artificial intelligence is another possibility.

Far from proceeding in the rational way set as an ideal, most of our moral views and decisions are made on immediate intuition, emotional response and gut reaction. Reasoning, if we do it at all, is often just rationalisation of what we intuitively thought anyway. To overcome our biological and psychological limitations, we could develop moral artificial intelligence. Many are very scared of this, perhaps because they know they are not strong moral agents. Some think such machines would recognise us for what we are (a danger to the planet) and kill us off. Given our potential to do this to each other, I'm dismissive of the machine problem. MIA could monitor a lot more than we manage as humans, point out personal bias, and advise on the right course of action according to human moral values. Agent-tailored MIA would preserve moral pluralism and help the individual's autonomy by removing the restrictions of her psychology.

I have volunteered Gabby for the first MIA chip (no wait, that was Cartman with the V chip in South Park). In fact, AI is already helping with a lot of learning. We are introducing AI into fraud management systems, with patents being filed - http://www.freepatentsonline.com/20150032589.pdf - car driving, medical and dental analysis, and narrative generation in entertainment - http://eprints.hud.ac.uk/23153/1/118.pdf. Big Data will drive Big HPC and Complex Analytics.
Supercomputers of the future will need to: (1) quantify the uncertainty associated with the behaviour of complex systems-of-systems (e.g. hurricanes, nuclear disasters, seismic exploration, engineering design) and thereby predict outcomes (e.g. the impact of intervention actions, the business implications of design choices); (2) learn and refine underlying models based on constant monitoring and past outcomes; and (3) provide real-time interactive visualization and accommodate "what if" questions in real time. This will require an evolution in algorithm and system design, and even in chip architectures, to manage the power-performance trade-offs needed to attain a new era of Cognitive Supercomputing.

Heads in the sand on this, folks? Or would you have the "implant" like me if one was available?
---
You received this message because you are subscribed to the Google Groups "Minds Eye" group.
To unsubscribe from this group and stop receiving emails from it, send an email to minds-eye+unsubscribe@googlegroups.com .
For more options, visit https://groups.google.com/d/optout .

