Say Hello to our Robot Overlords

Michael Francis
May 5, 2021
Photo by Morgan Housel on Unsplash

Ray Kurzweil has predicted that by 2030 general Artificial Intelligence will outperform our brain’s DNA-driven structures and be ubiquitous. Let’s put that statement in context today: in 2021, we are on the threshold of self-driving cars on the road. SpaceX launches, docks, and lands fully autonomous spacecraft regularly. In 2020, SpaceX successfully flew twenty-five orbital missions, two of which carried humans. Last week, Tesla announced that you are six times safer in one of their cars with self-driving enabled than driving yourself, and it has the data to back this claim. Yet the press is still filled with pictures of blown-up rockets and crashed autonomous vehicles.

SpaceX and Tesla are not alone in this: Blue Origin has a near-perfect track record for its launch system, and Waymo claims twenty billion miles of simulated and real-world driving experience. Outside the US, Rocket Lab now regularly launches rockets with electric pumps and 3D-printed engines, and China’s AutoX has self-driving robotaxis on the streets of Shenzhen. Behind all these innovations is the math of statistics and simulated intelligence.

What does this mean for us? Common wisdom has it that innovation follows an S curve: initially we observe slow but accelerating progress, which then switches to a period of exponential growth. Past the midpoint, the innovation starts to slow again, eventually tapering off to form the S. Many believe that AI is currently in the exponentially accelerating section of that curve.
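The S curve described here is the classic logistic function. A minimal sketch in pure Python (with midpoint and growth-rate parameters chosen purely for illustration) shows the slow–fast–slow shape: gains per unit time are tiny at the tails and largest at the midpoint.

```python
import math

def logistic(t, midpoint=0.0, rate=1.0, ceiling=1.0):
    """Logistic S curve: slow start, rapid middle, tapering finish."""
    return ceiling / (1.0 + math.exp(-rate * (t - midpoint)))

# Progress made during one unit of time at three points on the curve.
early  = logistic(-4.0) - logistic(-5.0)   # slow, accelerating start
middle = logistic(0.5)  - logistic(-0.5)   # steepest growth, at the midpoint
late   = logistic(5.0)  - logistic(4.0)    # tapering off toward the ceiling
```

By the symmetry of the logistic curve, the early and late increments are equal, and both are far smaller than the gain around the midpoint.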

Estimates suggest that simulating the human brain requires a computer operating at 10¹⁶ operations per second. Supercomputers exist today that hit this threshold and beyond.
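To put 10¹⁶ operations per second in perspective, a back-of-the-envelope comparison helps. The supercomputer figure below is an assumption for illustration, roughly the benchmark throughput reported for the Fugaku supercomputer around 2020; the exact number is not from this article.

```python
BRAIN_OPS_PER_SEC = 1e16    # the brain-simulation estimate cited above
SUPERCOMPUTER_FLOPS = 4.4e17  # assumed: Fugaku-class machine, circa 2020

# Raw throughput expressed as multiples of the brain-simulation estimate.
ratio = SUPERCOMPUTER_FLOPS / BRAIN_OPS_PER_SEC
```

Under these assumed numbers, a single top machine already has tens of times the raw throughput the estimate calls for, though raw operations per second are not the same as running an actual brain simulation.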

Just a few years ago, we were talking about the Inception image recognition model. The publicly available model can label multiple items in a scene with accuracy close to that of a human observer. Yes, you can fool it; we have all seen the cat labeled a dog. What’s not often talked about is that this model has domain expertise. When most people look at a picture of a large Husky-looking dog, they reply, “It’s a Husky.” The dog expert answers, “It’s an Alaskan Malamute.” The Inception model returns the expert’s answer. Shockingly, it does this not just in a single domain but for cats, birds, and any set of trainable data. It just so happens that there are lots of well-labeled training sets of dog pictures.

The same is true of language models; language-to-language translation, once a pipe dream, is becoming more and more commonplace. It isn’t perfect yet, but it far surpasses the ability of a typical second-language learner. Lastly, GPT-3, unleashed on the world in 2020, is possibly the first glimpse of the true power of deep learning models. Given a textual prompt, GPT-3 can create an entire story; in many cases, the generated text passes for that written by a human. You can fool the model, and under scrutiny the text degrades to nonsense. The question you have to ask is: is this nonsense creativity?

In the next few years, we will continue to see this rapid evolution of AI. We likely will not be able to predict when it becomes a general AI (or sentient), nor will we know at that moment; but looking back, we will be able to say that was the point. There is a chance that this point has already been crossed in a data center somewhere on Earth.
