On this day Eugene Goostman successfully convinced a third of a select committee at London’s Royal Society that he was a 13-year-old boy.
Eugene is a computer programme - the first to pass the iconic Turing Test.
The test was devised in 1950 by computer science pioneer and Second World War codebreaker Alan Turing, who said that if a machine was indistinguishable from a human, then it was “thinking”.
If a computer is mistaken for a human more than 30% of the time during a series of five-minute keyboard conversations, it passes the test. No computer had ever achieved this, until now. Eugene managed to convince 33% of the human judges that it was human.
A computer that can trick us into believing it is a person we can trust opens the door to all sorts of cybercrime, but let’s not dampen the mood with fears; let’s enjoy this historic moment.
Drone police to crack down on graffiti artists.
The BBC reports that Germany’s national railway company, Deutsche Bahn, plans to test small drones to try to reduce the amount of graffiti being sprayed on its property.
For a book on AI (Artificial Intelligence), ‘A Working Theory of Love’ sure has a lot of sex in it. The story is about Neill Bassett, a troubled soul who is trying to reincarnate his dead father (Dr. Bassett) in a talking computer. Neill is in his late thirties, a divorcé, geeky, slightly depressed and just a bit confused at times. He also has three women yearning for his attention (one of them his ex-wife), which is why he ends up with a garbled love life that leads him right into the inner circles of a bizarre sex cult.
AWTOL is the successful combination of real science and fictional narrative. Beautifully written with passages that are like poetry, this is an intelligent and compelling read. I highly recommend it.
You can read the first few pages here.
Below is my interview with the author, Scott Hutchins.
alexob: What inspired the idea for this story?
Hutchins: It began very observationally—me creating a repository for the daily bits of life in San Francisco. Then to do San Francisco, I had to bring in Silicon Valley. Pretty soon I had a story of a man living a dismissible life in San Francisco while trying to create the world’s first intelligent computer using his dead father’s diaries. The very philosophical questions—What is a human? What does it mean to think?—were enormously interesting to me. They allowed me access to a deeper level of inquiry in a book that has a fair amount of comedy of manners.
alexob: Without giving it away: what was the most challenging part of the book to write?
Hutchins: The conversations between the computer—aka Dr. Bassett—and the narrator, Neill Bassett Jr. I kept telling myself, “This is ludicrous. No one is going to accept this.” So I worked and I worked. I researched, I read. I experimented. Ultimately, those conversations are what many people love the most.
alexob: If you had a time machine what would you do with it?
Hutchins: I would probably travel into the immediate past. I’d love to know my parents as contemporaries, for instance. And I might range further in the past—meet Pascal. But as for the future, I’d rather speculate from my privileged place of ignorance. I don’t want to travel to a world where everyone I know is dead. Of course, I could go far ahead and come back, but I’ve read my H. G. Wells. I know things don’t turn out so well.
alexob: In the future humans and machines [fill in blank]
Hutchins: … will be best friends forever.
alexob: Yoda or Spock?
Hutchins: What a brilliant question! This sums up so many of my struggles. Eastern or Western? Wisdom or intellect? The answer is that I’d like to be Yoda, but I suspect I’m Vulcan.
So It Begins: Darpa Sets Out to Make Computers That Can Teach Themselves
Machine learning is how a computer (yellow) carries out a new task (red). The program adds its prior training (green), makes predictions, and completes the task. The result: the machine gets smarter. Illustration: Darpa
Creepy or Scary? BigDog - Pentagon’s robot is demanding some respect.
Oh, I am really excited about this: The Food and Drug Administration (FDA) has approved the Argus II artificial retina to restore partial sight to patients who are blind. The device, sometimes dubbed a bionic eye, was designed at the Lawrence Livermore National Laboratory.
Humanoid robot in development that looks and acts like a one-year-old kid.
Its facial features and movements are incredibly humanlike. Check it out! Eerie.
In my research for a recent article on the future of city living I came across a very exciting new forecast.
As most of us know, in September this year the senate in the State of California unanimously passed a bill favouring the driverless car on its roads, but it’s the forecast by the Institute of Electrical and Electronics Engineers (IEEE) that is more exciting. It goes as far as stating that driverless vehicles are the most promising form of intelligent transportation of the future, and predicts that up to seventy-five percent of cars on the road in 2040 will be of the driverless kind. Undoubtedly, these vehicles will change the driving infrastructure. Attitudes may also change once self-driving cars are integrated into our day-to-day lives. It is assumed that they will be programmed and regulated by a central hub, which will know each car’s location and destination and adjust their speeds to optimise traffic flow and avoid crashes.
What does this mean for us and our future? We won’t ever have to worry about our grandchildren speeding on the roads. They won’t be able to, as they will likely never have learnt to drive a car; with the computerised mobility systems of the future, they won’t need to. Furthermore, if cars no longer need humans to pilot them, the IEEE suggests that driver’s licences may come to seem redundant.
Developed by Associate Professor Devavrat Shah and his student Stanislav Nikolov at MIT, the algorithm can predict, with 95 percent accuracy, which topics will trend an average of an hour and a half before Twitter’s algorithm puts them on the list — and sometimes as much as four or five hours before.
It combs through data in a sample set — in this case, data about topics that previously did and did not trend — and tries to find meaningful patterns. What distinguishes it is that it’s nonparametric, meaning that it makes no assumptions about the shape of patterns.
The algorithm could be of great interest to Twitter, which could charge a premium for ads linked to popular topics, but it also represents a new approach to statistical analysis that could, in theory, apply to any quantity that varies over time: the duration of a bus ride, ticket sales for films, maybe even stock prices.
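The published details of the MIT method aren’t reproduced here, but the nonparametric idea the article describes (compare a new topic’s activity curve against stored examples that did and did not trend, with closer matches voting more heavily) can be sketched in a few lines of Python. Everything below — the function names, the toy tweet-count data and the bandwidth `h` — is my own illustration, not the researchers’ code:

```python
import math

def distance(a, b):
    """Euclidean distance between two equal-length activity curves."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def predict_trending(observed, trending_examples, flat_examples, h=1.0):
    """Kernel-weighted vote: every stored example votes for its own
    label, and examples whose curves lie closer to the observed curve
    get exponentially larger weights."""
    def weight(example):
        return math.exp(-distance(observed, example) / h)
    trend_score = sum(weight(e) for e in trending_examples)
    flat_score = sum(weight(e) for e in flat_examples)
    return trend_score > flat_score

# A topic whose tweet counts are accelerating resembles the stored
# trending curves, so the weighted vote comes out True.
trending = [[0, 1, 2, 4, 8], [0, 1, 3, 5, 9]]
flat = [[1, 1, 1, 1, 1], [2, 1, 2, 1, 2]]
print(predict_trending([0, 1, 2, 5, 8], trending, flat))  # True
```

Because the method stores raw examples rather than fitting a fixed-shape model, it makes no assumptions about what a “trending” curve must look like — which is what “nonparametric” means in this context.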
An artificially intelligent virtual gamer created by computer scientists at The University of Texas at Austin has won the BotPrize by convincing a panel of judges that it was more human-like than half the humans it competed against. The competition was sponsored by 2K Games and was set inside the virtual world of “Unreal Tournament 2004,” a first-person shooter video game. The winners were announced this month at the IEEE Conference on Computational Intelligence and Games. “The idea is to evaluate how we can make game bots, which are nonplayer characters (NPCs) controlled by AI algorithms, appear as human as possible,” said Risto Miikkulainen, professor of computer science in the College of Natural Sciences. Miikkulainen created the bot, called the UT^2 game bot, with doctoral students Jacob Schrum and Igor Karpov.
Teaching a computer how to lip read isn’t science fiction, it’s a reality and it’s happening right now in Malaysia. Here, researchers at the International University in Selangor are teaching a computer to interpret human emotions based on lip patterns in order to improve the way people interact with computers.
The scientists developed their system using a genetic algorithm that improves with each iteration to match irregular ellipse fitting equations to the shape of a human mouth displaying different emotions. The team used photos of individuals from South-East Asia and Japan to train the computer to recognize the six commonly accepted human emotions — happiness, sadness, fear, anger, disgust, surprise — and neutrality. The algorithm then analyzed the upper and lower lip as two separate ellipses.
In the current study, the researchers’ algorithm successfully classified all six emotions as well as the neutral expression. The researchers suggested that initial applications of such an emotion detector could involve, for instance, helping disabled patients lacking speech to interact more effectively with computer-based communication devices.
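The team’s actual equations aren’t given in this summary, but the core loop of a genetic algorithm fitting an ellipse to mouth-contour points — guess, score, keep the best, mutate, repeat — can be sketched as below. The point data, population size and mutation scale are hypothetical stand-ins for illustration, not the researchers’ implementation:

```python
import math
import random

# Hypothetical lip-contour points sampled from an ellipse with
# semi-axes a=3.0 (mouth width) and b=1.0 (height), centred at origin.
LIP_POINTS = [(3.0 * math.cos(i * math.pi / 6), 1.0 * math.sin(i * math.pi / 6))
              for i in range(12)]

def fitness(a, b, points):
    """Total deviation of the points from the ellipse
    x^2/a^2 + y^2/b^2 = 1; lower means a better fit."""
    return sum(abs((x / a) ** 2 + (y / b) ** 2 - 1) for x, y in points)

def evolve(points, pop_size=30, generations=60, seed=0):
    rng = random.Random(seed)
    # Start from random semi-axis guesses.
    pop = [(rng.uniform(0.5, 5), rng.uniform(0.5, 5)) for _ in range(pop_size)]
    for _ in range(generations):
        # Keep the best-fitting half, discard the rest.
        pop.sort(key=lambda ab: fitness(*ab, points=points))
        survivors = pop[:pop_size // 2]
        # Refill the population with mutated copies of the survivors
        # (clamped so the axes stay positive).
        pop = survivors + [(max(0.1, a + rng.gauss(0, 0.1)),
                            max(0.1, b + rng.gauss(0, 0.1)))
                           for a, b in survivors]
    return min(pop, key=lambda ab: fitness(*ab, points=points))

a_fit, b_fit = evolve(LIP_POINTS)  # converges near (3.0, 1.0)
```

Each generation the mutated offspring that happen to fit the contour better displace their parents, which is the “improves with each iteration” behaviour described above; classifying an emotion would then come down to comparing the fitted upper- and lower-lip ellipse parameters against those learned for each expression.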
Image Credit: Dmitriy Kiryushchenkov / Shutterstock
June 23rd was the centenary of the birth of Alan Turing, father of computer science and artificial intelligence, who committed suicide just shy of 42. (King’s College, University of Cambridge).
View a short video bio here.