By SI Filings, Apr 18 2019
Filed in: Artificial Intelligence, Technology

"When machines begin mimicking human intelligence they can potentially be engaging in moral behavior, making them artificial moral agents (AMAs)." - TGC

There are 2 Comments

"if"
Bert Perry - Thu, 04/18/2019 - 12:36pm

It's worth noting--I just did a review at work for applications of AI vis-a-vis storage and memory requirements--that at this point, AI does not appear to be anywhere near the point of "mimicking human intelligence." Rather, what's really going on is that instead of the traditional "von Neumann machine" that is the computer you're using to read SI (or your cell phone, or whatever), AI applications distribute processing and memory in a way that mimics, in some respects, how real nervous systems work. Key to that is a "learning function," which is a fancy way of saying that there's an algorithm that decides how the system will respond to its inputs.

Which is a long way of saying that we are, thankfully, a long way off from the scenario of "Terminator" or "The Incredibles." The human nervous system has a fascinating capability that can't be replicated on silicon: it can rewire itself. A machine would still have a "learning function," of course, and you can approximate that rewiring by providing certain features, but we are, for the likely future, "still in control."

(The old joke is that when you're presenting a paper on AI or neural networks, your second slide is an image of neurons, which you gloss over on your way to talking about what you really came to talk about.)

Aspiring to be a stick in the mud.

"Mimic"
Aaron Blumer - Fri, 04/19/2019 - 7:32am

Depends on what you mean by "mimic." Humans do all kinds of "if A, then B, else C" reasoning almost constantly. The article goes on to talk about machine learning and advances in that. What needs more attention in the debate is not really degrees of "intelligence" (which is increasingly just a word for logical complexity) but what constitutes a person.
In sci-fi, much is made of "sentience" or "consciousness," but a materialist/naturalist paradigm really can't explain what makes a person a person. But even short of an AI with alleged personhood, there is a lot of ethics to wrestle with. We're seeing (mostly unwarranted) controversy about this already in policing, where AI is used in facial recognition and crime hot-spot prediction. Because machines crunching lots of data and doing lots of complex calculations are involved, some instinctively think "predictive policing" is dangerous. The kernel of truth is that there is a trend of turning more and more analysis over to machines. If that trend continues... where does it eventually go? It's a legitimate question.

Personally, I'm more concerned about what sort of weak-mindedness we're building into ourselves with this trend. When are we using machines to do thinking we can't do ourselves, versus thinking we just don't feel like doing or that is too expensive to do ourselves?
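Editor's note: the "learning function" described in the first comment, an algorithm that adjusts how a system responds to its inputs, can be illustrated with the classic perceptron update rule. This is a minimal textbook sketch, not anything from the article or the commenters' systems; all function names here are invented for the example.

```python
# A toy "learning function": the perceptron update rule. The system's
# response to an input is a thresholded weighted sum, and the learning
# step nudges the weights whenever the response is wrong.

def train_perceptron(samples, labels, epochs=20, lr=0.1):
    """Learn weights so that sign(w . x + b) matches each label (+1 or -1)."""
    n = len(samples[0])
    w = [0.0] * n
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            # The current "response" to input x.
            activation = sum(wi * xi for wi, xi in zip(w, x)) + b
            pred = 1 if activation >= 0 else -1
            if pred != y:
                # The learning step: shift weights toward the correct answer.
                w = [wi + lr * y * xi for wi, xi in zip(w, x)]
                b += lr * y
    return w, b

def predict(w, b, x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b >= 0 else -1

# Learn logical AND: respond +1 only when both inputs are 1.
samples = [(0, 0), (0, 1), (1, 0), (1, 1)]
labels = [-1, -1, -1, 1]
w, b = train_perceptron(samples, labels)
print([predict(w, b, x) for x in samples])  # [-1, -1, -1, 1]
```

Nothing here "rewires itself," in keeping with the comment's point: the architecture is fixed, and only the numeric weights change in response to inputs.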