A Self-Taught Chess Monster
Is this the next wave in computer chess (and perhaps not only chess)? London programmer Matthew Lai wrote a program, which in turn taught itself to play chess (I think they mean to play well), and in three days of "self-programming" it managed to reach IM strength. Contrary to what the article seems to suggest, that is a long, long way from the strength of the best programs (it's 800+ points away), but it's still an impressive accomplishment, especially if this is just the first go-round for the approach.
But let's not worry too much. As long as we're nice to our coming computer overlords, they'll at least put (some of) us in people zoos. (For long-time readers of the blog, fear not: I remain a substance dualist. But even if I'm right and robots only simulate intelligence and are never conscious, they could still be just as dangerous as if they were.)
HT: David Korn
Reader Comments (3)
..."I remain a substance dualist."
You mean like writing about chess on your football blog?
[DM: Something like that.]
Anyone interested in self-teaching AI might like this brief YouTube presentation: https://www.youtube.com/watch?v=rbsqaJwpu6A . Go to ~10'00'' for some fun and impressive examples of a program "figuring out" basic computer games. True, they are fairly simple, but as noted at the end, the same concept could readily be extended to more complicated games like chess. All you need to do is develop the right self-correcting mechanism and then simply let the program run. It is encouraging to see some success already, particularly because this kind of technology, as the presenter points out, will be exceedingly useful in many other areas as well.
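The "self-correcting mechanism" idea in that comment can be sketched in miniature. Below is a toy illustration of my own (not anything from the video, and certainly not Lai's actual method): a program learns a trivial one-pile Nim variant purely by playing against itself, nudging its position evaluations toward observed outcomes with a simple tabular TD(0)-style update. The game, parameters, and names are all assumptions chosen for the sketch.

```python
import random

# Toy "self-correcting mechanism": the program learns a one-pile Nim
# variant (take 1 or 2 stones; whoever takes the last stone wins) by
# self-play alone, adjusting a table of value estimates toward the
# results it observes. A tabular TD(0) sketch, purely illustrative.

random.seed(0)  # for reproducibility of the sketch

N = 10          # starting pile size
ALPHA = 0.2     # learning rate
EPS = 0.1       # exploration rate
V = {s: 0.5 for s in range(N + 1)}  # V[s]: est. win prob for the player to move
V[0] = 0.0      # no stones left: the player to move has already lost

def best_move(s):
    # Pick the move that leaves the opponent in the worst position
    # (it is their turn after we move).
    moves = [m for m in (1, 2) if m <= s]
    return min(moves, key=lambda m: V[s - m])

def play_and_learn(games=20000):
    for _ in range(games):
        s = N
        while s > 0:
            legal = [m for m in (1, 2) if m <= s]
            m = random.choice(legal) if random.random() < EPS else best_move(s)
            nxt = s - m
            # The opponent's value at nxt is our probability of losing,
            # so our target is its complement: 1 - V[nxt].
            V[s] += ALPHA * ((1.0 - V[nxt]) - V[s])
            s = nxt

play_and_learn()
# Positions that are multiples of 3 are theoretically lost for the
# player to move; the learned values should reflect that.
print(sorted(s for s in range(1, N + 1) if V[s] < 0.5))  # -> [3, 6, 9]
```

Nothing here is chess-specific: replace the pile of stones with a board, the lookup table with a function approximator, and the same "play, compare, adjust" loop is the self-correcting mechanism the presenter describes.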
I am always surprised when I hear people talk about computers taking over the world after they reach a certain level of consciousness. I disagree with the notion that greed and a drive for control are inevitable in conscious beings. We tend to think that conscious machines would in effect become another category of humans. I guess this is the hubris of humanity.
[DM: I tend to think it's just the opposite. Our hubris comes in thinking that if beings are really, really smart - as so-called thought leaders tend to fancy themselves - well then they will *of course* be good and honorable. It seems to me that the ubiquity of violence (in humans as well as animals), generally done in the narrow self-interest of the individual or one's tribe, offers a presumption against utopianism. Additionally, various studies have shown that human beings with limited or stunted emotions, who operate far more on (apparent) rationality than most of the population, tend to behave far worse. We need empathy to keep society together.
That said, those who are concerned about what might happen if and when some sort of super-intelligence arrives (which may or may not be conscious - it really doesn't matter) do not assume the worst outcome, but only that it is a reasonable possibility that ought to be addressed and dealt with now, while it's still possible to do so.]