Category Archives: Artificial Intelligence

Artificial Intelligence Is the Most Important Technology of the Future

Artificial Intelligence is a set of tools that is driving forward key parts of the futurist agenda, sometimes at a rapid clip. The last few years have seen a slew of surprising advances: the IBM supercomputer Watson, which beat two Jeopardy! champions; self-driving cars that have logged over 300,000 accident-free miles and are now legal in three states; and statistical learning techniques that perform pattern recognition on complex data sets, from consumer interests to trillions of images. In this post, I’ll bring you up to speed on what is happening in AI today and talk about potential future applications. Any brief overview of AI will necessarily be incomplete, but I’ll describe a few of the most exciting items.

The key applications of Artificial Intelligence lie in any area that involves more data than humans can handle on their own, but decisions simple enough that an AI can get somewhere with them: big data, plus lots of little rote operations that add up to something useful. An example is image recognition. By doing rigorous, repetitive, low-level calculations on image features, we now have services like Google Goggles, where you take a picture of something, say a landmark, and Google tries to recognize what it is. Services like these are the first stirrings of Augmented Reality (AR).
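At its core, this kind of recognition boils down to comparing numerical features extracted from a query image against a database of known objects. Here is a minimal sketch of the idea; the landmark names and three-element feature vectors are made up for illustration, and real systems use thousands of learned features rather than hand-picked numbers:

```python
import math

# Toy illustration of recognition-by-feature-matching (not Google's actual
# pipeline): each known landmark is stored as a feature vector, and a query
# image is matched to its nearest neighbor.
LANDMARKS = {
    "Eiffel Tower": [0.9, 0.1, 0.3],
    "Golden Gate":  [0.2, 0.8, 0.5],
    "Taj Mahal":    [0.4, 0.4, 0.9],
}

def euclidean(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def recognize(query_features):
    """Return the landmark whose stored features are closest to the query."""
    return min(LANDMARKS, key=lambda name: euclidean(LANDMARKS[name], query_features))

print(recognize([0.85, 0.15, 0.25]))   # prints "Eiffel Tower"
```

The "rote operations" the post describes are exactly these distance computations, repeated across an enormous database.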

It’s easy to see how this kind of image recognition can be applied to repetitive tasks in biological research. One such difficult task is in brain mapping, an area that underlies dozens of transhumanist goals. The leader in this area is Sebastian Seung at MIT, who develops software to automatically determine the shape of neurons and locate synapses. Seung developed a fundamentally new kind of computer vision for automating work towards building connectomes, which detail the connections between all neurons. These are a key step to building computers that simulate the human brain.

As an example of how difficult it is to build a connectome without AI, consider the case of the nematode C. elegans, the only organism with a completed connectome to date. Although electron microscopy was used to exhaustively image this worm’s nervous system in the 1970s and 80s, it took more than a decade of work to piece the data into a full wiring map. This is despite that nervous system containing just about 7,000 connections between 302 neurons. By comparison, the human brain contains roughly 100 trillion connections between 100 billion neurons. Without sophisticated AI, mapping it would be hopeless.
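To put those numbers side by side (302 is the standard neuron count for the adult worm):

```python
# Scale comparison: C. elegans nervous system vs. the human brain,
# using the figures quoted in the text.
worm_neurons, worm_connections = 302, 7_000
human_neurons, human_connections = 100_000_000_000, 100_000_000_000_000

neuron_factor = human_neurons / worm_neurons              # ~3.3e8 times more neurons
connection_factor = human_connections / worm_connections  # ~1.4e10 times more connections
print(f"{neuron_factor:.2e} {connection_factor:.2e}")
```

If a decade of manual effort was needed for the worm, the human brain is roughly ten billion times that problem.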

There’s another closely related area that depends on AI to make progress: cognitive prostheses. These are brain implants that can perform the role of a part of the brain that has been damaged. Imagine a prosthesis that restores crucial memories to Alzheimer’s patients. The feasibility of a prosthesis for the hippocampus, the part of the brain responsible for forming memories, was demonstrated recently by Theodore Berger at the University of Southern California: a rat with its hippocampus chemically disabled was able to form new memories with the aid of an implant.

These implants are built by carefully recording the neural signals of the brain and making a device that mimics the way they work. The device itself uses an artificial neural network, which Berger calls a High-density Hippocampal Neuron Network Processor. Painstaking observation of the brain region in question is needed to build a model detailed enough to stand in for the original. Without neural network techniques (a subcategory of AI) and abundant computing power, this approach would never work.
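To make the idea concrete, here is a toy sketch of the modeling step: fit an artificial neuron by gradient descent until it reproduces a recorded stimulus-response mapping. The "recorded" data and all parameter values are invented for illustration, and Berger's actual processor models hippocampal dynamics at far greater fidelity:

```python
import math
import random

# Toy sketch: take "recorded" input/output pairs from a circuit, then fit
# a single sigmoid neuron so it reproduces the same mapping.
random.seed(0)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Pretend these are recorded stimulus -> response pairs from the circuit.
recorded = [([0.0, 0.0], 0.05), ([0.0, 1.0], 0.90),
            ([1.0, 0.0], 0.90), ([1.0, 1.0], 0.95)]

w = [random.uniform(-0.5, 0.5), random.uniform(-0.5, 0.5)]
b = 0.0
lr = 1.0

for _ in range(5000):                     # stochastic gradient descent
    for inputs, target in recorded:
        out = sigmoid(sum(wi * xi for wi, xi in zip(w, inputs)) + b)
        grad = (out - target) * out * (1.0 - out)   # d(error)/d(pre-activation)
        w = [wi - lr * grad * xi for wi, xi in zip(w, inputs)]
        b -= lr * grad

for inputs, target in recorded:
    out = sigmoid(sum(wi * xi for wi, xi in zip(w, inputs)) + b)
    print(inputs, "->", round(out, 2), "(target", target, ")")
```

Scaled up from one neuron to a dense network, and from four data points to painstaking recordings, this is the shape of the approach.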

Bringing the overview back to more everyday tech, consider all the AI that will be required to make the vision of Augmented Reality mature. AR, as exemplified by Google Glass, uses computerized glasses to overlay graphics on the real world. For the tech to work, it needs to quickly analyze what the viewer is seeing and generate graphics that provide useful information. To be useful, the glasses have to identify complex objects from any direction, under any lighting conditions, no matter the weather. To help a driver, for instance, the glasses would need to identify roads and landmarks faster and more reliably than any current technology can. AR is not there yet, but probably will be within the next ten years. All of this falls under advances in computer vision, a part of AI.

Finally, let’s consider some recent advances in building AI scientists. In 2009, “Adam” became the first robot to discover new scientific knowledge, a finding about the genetics of yeast. The robot, which consists of a small room full of experimental equipment connected to a computer, came up with its own hypothesis and tested it. Though the context and the experiment were simple, this milestone points to a new world of robotic possibilities. This is where the intersection between AI and other transhumanist areas, such as life extension research, could become profound.

Many experiments in life science and biochemistry require a great deal of trial and error. Certain experiments are already automated with robotics, but what about computers that formulate and test their own hypotheses? Making this feasible would require the computer to have a great deal of common-sense knowledge, as well as specialized knowledge about the subject area. Consider a robot scientist like Adam with the object-level knowledge of the Jeopardy!-winning Watson supercomputer. This could be built today in theory, but it will probably be a few years before anything like it is built in practice. Once it is, it’s hard to predict the scientific returns, but they could be substantial. We’ll just have to build it and find out.
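The closed hypothesize-experiment-eliminate loop at the heart of a robot scientist like Adam can be sketched in a few lines. Everything here, the gene names and the growth assay, is made up for illustration; only the control flow matters:

```python
# Toy sketch of a closed hypothesize -> experiment -> eliminate loop,
# loosely modeled on Adam's yeast-genetics work. Gene names and the
# assay are invented for illustration.
GENES = ["orf1", "orf2", "orf3", "orf4"]
TRUE_GENE = "orf3"          # hidden ground truth the robot must discover

def run_experiment(gene):
    """Simulated assay: knocking out the responsible gene stops growth."""
    return gene != TRUE_GENE        # True means the knockout strain still grows

hypotheses = set(GENES)             # hypothesis: "gene X encodes the enzyme"
while len(hypotheses) > 1:
    gene = sorted(hypotheses)[0]    # pick the next untested hypothesis
    if run_experiment(gene):        # strain grew, so this gene is not responsible
        hypotheses.discard(gene)
    else:                           # growth stopped: hypothesis confirmed
        hypotheses = {gene}

conclusion = hypotheses.pop()
print("Conclusion:", conclusion)    # prints "Conclusion: orf3"
```

The hard part in reality is not this loop but everything inside `run_experiment`: robotics, assay design, and the background knowledge needed to propose sensible hypotheses in the first place.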

That concludes this brief overview. There are many other interesting trends in AI, but machine vision, cognitive prostheses, and robotic scientists are among the most interesting, and relevant to futurist goals.

I would like to thank Michael Anissimov, a fellow transhumanist and author of the Accelerating Future blog, for contributing this piece. 


Filed under Artificial Intelligence

Stop Discrediting the Serene Ideas of Transhumanism

Yesterday the website for the Global Future 2045 congress went live, and I looked at the Speakers page. Credible researchers, like Dr. George Church, Dr. Marvin Minsky, Dr. Ed Boyden, Dr. Hiroshi Ishiguro, and Dr. Peter Diamandis, are mixed with self-realized Siddha masters, Jewish mystical meditators, and mahamandaleshwars of the Juna Akhara order of Hindu monks. This is beyond unacceptable.

This combination of researchers and religious activists will allow the organizers to promote their religious ideas, whatever those may be. Substrate-Independent Minds has nothing to do with religion and faith. It is crucial not to transform Substrate-Independent Minds into some sectarian quest for a soul.

Combining religious leaders and credible researchers discredits both the work of those scientists and transhumanist ideas.


Filed under Mind Uploading, Policy

Virtual Humans Can and Will Accelerate Medical Testing

Virtual humans are already helping researchers work out the right radiation doses for pregnant women and the right defibrillator types for children. Virtualization technologies make it possible to create anatomically accurate simulations of the human body and its organs. The physiological properties may not be 100% accurate yet, but they are improving quickly. I believe that at some point in the future we will have sophisticated virtual models of our own bodies, accurate down to the molecular level, depicting all the interactions between the molecules within our cells and tissues. Such in silico humans would let us test potential drugs for safety and efficacy and model aging and pathological processes at every level of the body, which would lead to significant life extension. All we need is accurate mathematical descriptions of biological processes and enough computational resources. That sounds impossible, but I’m optimistic: technological progress is growing exponentially, so the probability of gaining sufficient knowledge and computing power within an observable time frame is not that small.
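As a tiny example of what "mathematical descriptions of biological processes" means in practice, here is a one-compartment pharmacokinetic model, where drug concentration decays by first-order elimination, dC/dt = -kC, integrated numerically. The parameter values are illustrative, not drawn from any real study:

```python
# One-compartment pharmacokinetic model: dC/dt = -k * C, integrated with
# simple Euler steps. Parameter values are illustrative only.
def simulate_drug(c0=10.0, k=0.1, dt=0.01, hours=24):
    """Return the drug concentration at the end of each hour."""
    c, trace = c0, []
    steps_per_hour = round(1 / dt)
    for step in range(hours * steps_per_hour):
        c += -k * c * dt                        # Euler update of dC/dt = -kC
        if (step + 1) % steps_per_hour == 0:
            trace.append(round(c, 3))
    return trace

trace = simulate_drug()
print(trace[0], trace[-1])   # concentration after 1 h and after 24 h
```

A whole-body virtual human is, in effect, millions of coupled equations like this one, which is why the computational resources matter as much as the biology.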

Read the Wall Street Journal article “Scientists Find Safer Ways To Test Medical Procedures”

Leave a comment

Filed under Artificial Intelligence

Randal Koene on Substrate-Independent Minds via H+ Magazine

H+ Magazine has some very interesting articles, one of which I liked in particular, since I’m keen on advances in Substrate-Independent Minds, aka mind uploading. Read this extremely informative interview with Randal Koene, conducted by Ben Goertzel. I’d like to highlight one quote, an outline of the researchers now working in the field of Substrate-Independent Minds:

  • Ken Hayworth and Jeff Lichtman (Harvard) are the guiding forces behind the development of the ATLUM, and of course Jeff also has developed the useful Brainbow technique.
  • Winfried Denk (Max-Planck) and Sebastian Seung (MIT) popularized the search for the human connectome and continue to push its acquisition, representation and simulations based on reconstructions forward, including recent publications in Science.
  • Ed Boyden (MIT) is one of the pioneers of optogenetics, a driver of tool development in neural engineering, including novel recording arrays and a strong proponent of brain emulation.
  • George Church (Harvard), previously best known for his work in genomics, has entered the field of brain science with a keen interest in developing high-resolution large-scale neural recording and interfacing technology. Based on recent conversation, it is my belief that he and his lab will soon become important innovators in the field.
  • Peter Passaro (Sussex) is a driven researcher with the personal goal of achieving whole brain emulation. He is doing so by developing means for functional recording and representation that are influenced by the work of Chris Eliasmith (Waterloo).
  • Yoonsuck Choe (Texas A&M) and Todd Huffman (3Scan) continue to improve the Knife-Edge Scanning Microscope (KESM), which was developed by the late Bruce McCormick with the specific aim of acquiring structural data from whole brains. The technology operates at a lower resolution than the ATLUM, but is presently able to handle acquisition at the scale of a whole embedded mouse brain.
  • Henry Markram (EPFL) has publicly stated his aim of constructing a functional simulation of a whole cortex, using his Blue Brain approach that is based on statistical reconstruction based on data obtained from studies conducted in many different (mostly rat) brains. Without a tool such as the ATLUM, the Blue Brain Project will not develop a whole brain emulation in the truest sense, but the representational capabilities, functional verification and functional simulations that the project produces can be valuable contributions towards substrate-independent minds.
  • Ted Berger (USC) is the first to develop a cognitive neural prosthetic. His prosthetic hippocampal CA3 replacement is small and has many limitations, but the work forces researchers to confront the actual challenges of functional interfacing within core circuitry of the brain.
  • David Dalrymple (MIT/Harvard) is commencing a project to reconstruct the functional and subject-specific neural networks of the nematode C. elegans. He is doing so to test a very specific hypothesis relevant to SIM, namely whether data acquisition and reimplementation can succeed without needing to go to the molecular level.


Filed under Mind Uploading

DARPA wants a machine to suck all your blood out, other fun stuff

DARPA’s budget for next year includes funding for all kinds of wild new medical technologies for military medicine, from electromagnetic tissue regeneration to a machine that can suck your blood out, clean it, and then fill you back up.

DARPA is making a major push to try to reduce battlefield casualties, and they’re pouring a lot of money into new technologies to help soldiers recover from injury. The blood-sucking machine is part of a ‘Dialysis-Like Therapeutics’ program designed to combat sepsis, which is caused by toxins in the blood. Basically, DARPA is looking for a system that can filter up to 5 liters of blood at a time, identifying and removing bacteria, viruses, poisons, and other toxic stuff, and then returning clean blood to the body.

Also on the table are new autonomous diagnostic sensors that can detect both known and unknown diseases and come up with fast and effective treatments, and tissue regeneration technology that uses hordes of individually magnetized cells controlled by electromagnetic fields to encourage natural ‘scaffolding’ to promote the rapid healing of wounds.

One other exciting little nugget that somehow falls under the medical category for DARPA is the creation of artificial eyes that see as well as the biological eyes of animals. From the sound of things, the end result of the Neovision2 program will be little electronic eyeballs that can learn and recognize objects as quickly as we can, that can be tossed into dangerous situations and report back what they see. Plus, throwing disembodied eyeballs around just generally sounds like a good idea and a lot of fun!

Leave a comment

Filed under Article, Artificial Intelligence, Immortalism, Life Extension, Tissue rejuvenation

Boston Dynamics Building Fast-Running Robot Cheetah, New Agile Humanoid

Boston Dynamics, best known for its BigDog bionic beast and other agile machines, is developing two new robots: one will be a super fast quadruped called Cheetah, the other is a freakishly scary full-size humanoid called T-800 Atlas.

The Cheetah robot will have a flexible spine, an articulated head and neck, and possibly a tail. Like the BigDog robot, Cheetah will be able to accelerate rapidly and make tight turns so it can “chase or evade,” the company said in a statement.

In fact, Boston Dynamics says Cheetah will sprint “faster than any existing legged robot and faster than the fastest human runners.”

Continue reading

1 Comment

Filed under Artificial Intelligence, Robotics

The New AI: An interview with Juergen Schmidhuber

Here’s an interview with a very interesting individual, Juergen Schmidhuber, a German artificial intelligence researcher who has worked on recurrent neural networks, Gödel machines, universal learning algorithms, artificial evolution, robotics, and neural-network-based financial forecasting. From 2004 to 2009 he was professor of Cognitive Robotics at the Technical University of Munich. Since 1995 he has been co-director of the Swiss AI Lab IDSIA in Lugano, and since 2009 also professor of Artificial Intelligence at the University of Lugano. In honor of his achievements he was elected to the European Academy of Sciences and Arts in 2008.

The interview was done by Sander Olson, a correspondent for

Question: You have a plan to build an “optimal scientist”. What do you mean by that?

Answer: An optimal scientist excels at exploring and then better understanding the world and what can be done in it. Human scientists are suboptimal and limited in many ways. I’d like to build an artificial one smarter than myself (my colleagues claim that should be easy) who will then build an even smarter one, and so on. This seems to be the most efficient way of using and multiplying my own little bit of creativity.

Continue reading


Filed under Artificial Intelligence