Artificial Intelligence — Death of Digital Computing

Vaibhav Shakkarwal
Mar 7, 2022

The dual nature of light was debated by scientists for a very long time. Newton believed that light was a beam of particles (corpuscles, in his terms; what we now call photons), making this one of the most controversial topics in physics. By the end of the 19th century, everyone had concluded that Newton was wrong, and that light was indeed a wave rather than a beam of particles! That puzzled people even more: if light was a wave, then what was waving? Consider the ocean and water: is the ocean itself a wave, or is it a property of water that makes the ocean wave? There was no single true answer to the nature of light, so it was concluded that light is both, a beam of photons that also behaves as a wave!

The same can be said about human intelligence. In mimicking human intelligence in our machines, and in the rush to build applications with artificial intelligence, we might have missed one minute detail: unlike our artificially intelligent counterparts, human intelligence is a combination of both digital and analog computation. Humans observe through multiple sensory organs, and these signals are carried by neurons as electrical impulses. These impulses are processed by our brains, giving us the mind-blowing ability to distinguish between cats and dogs! Maybe differentiating cats from dogs is an understatement, considering that the human brain has enabled us to send people to the Moon, and that we are now seriously planning human missions to Mars. Unbelievable, right?

“A year spent in artificial intelligence is enough to make one believe in God.”

— Alan Perlis

But what if our approach to mimicking human intelligence in the form of Artificial Intelligence started off on the wrong foot? What if the purely digital nature of our artificial intelligence algorithms is limiting the overall capabilities of these systems? Are we really on the verge of an intelligent technological breakthrough, or will we have to go 40 years back in time to correct our approach? In this article, we will discuss why the end of the digital era may give rise to the use of analog computers.

The Human Anatomy by David Matos (@davidmatos)

Why Analog and not Digital?

You might be wondering: what is wrong with current digital technology? Could the technology that lets us enjoy Netflix on a lazy Sunday afternoon really fall out of fashion in the coming years? And what would replace it?

An analog signal is a continuous signal that represents a physical measurement, whereas digital signals are discrete-time signals generated by digital modulation. An analog computer can be rearranged to perform various operations. Simple analog computers use voltage, current and resistance to perform calculations: voltage varies continuously, in contrast to the discrete 1s and 0s of digital systems. The problem with the digital logic at the lowest level of our machines is that every task is broken into mathematical operations performed by billions of nanometre-scale transistors. It takes approximately 50 transistors to add two 8-bit numbers and almost 1,000 transistors to multiply them.

Analog, on the other hand, can add two numbers by simply representing them as two currents and joining those currents at a node, so that the output current is their sum (Kirchhoff's current law). Multiplying two numbers is as simple as passing a current through the required resistance; the product appears as the voltage, per the relation V = I × R.

Addition and Multiplication in Analog
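To make the mapping concrete, here is a minimal Python sketch of these two operations. The function names and units are illustrative only; no real circuit behavior is modeled here.

```python
# A minimal sketch of arithmetic on electrical quantities. The function
# names and units are illustrative; no real circuit is being simulated.

def analog_add(i1_amps, i2_amps):
    """Addition: two currents joined at a node simply sum
    (Kirchhoff's current law)."""
    return i1_amps + i2_amps

def analog_multiply(current_amps, resistance_ohms):
    """Multiplication: pass a current through a resistor and read the
    voltage across it (Ohm's law, V = I * R)."""
    return current_amps * resistance_ohms

print(analog_add(2.0, 3.0))       # 5.0 "amps"  -> 2 + 3
print(analog_multiply(2.0, 4.0))  # 8.0 "volts" -> 2 * 4
```

The point is that the physics does the arithmetic: nothing is fetched, decoded, or clocked, which is exactly where the speed and power advantages discussed next come from.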

PROS of analog computers over digital computers

  1. Analog computers can be much faster than digital computers at the operations they are built for
  2. Analog computers require a fraction of the power that digital computers do
  3. Analog computers perform massively parallel operations better than their digital counterparts

CONS of analog computers over digital computers

  1. They are not general-purpose: you won't be able to watch Netflix or run Microsoft Word on these computers.
  2. They are single-purpose, meaning you have to re-wire them every time you want to compute something different.
  3. Since the inputs and outputs are continuous, the answer is not exactly the same every time.
  4. Components such as resistors and wires differ slightly from one another, and it is very hard to manufacture two with exactly the same resistance, leading to inexact and inaccurate answers.

These are the reasons why analog computers fell out of fashion when digital computers came into the picture, despite analog's advantages in speed and power. In the next section, we will discuss why these computers might be making a comeback.

Analog computers and neural computing

The biggest reason analog computers are making a comeback is Artificial Intelligence. Artificial intelligence is the ability of a computer or a robot to perform tasks without human intervention. These tasks would otherwise require human intelligence and discernment, so building systems with artificial intelligence is a crucial step in the development of mankind.

One of the earliest milestones of artificial intelligence came in 1958, when Rosenblatt (Rosenblatt, 1958) created the perceptron, an artificial neuron designed to mimic a biological one.

Rosenblatt working on the Perceptron, 1962

Neurons in the brain function by either firing (a 1) or not firing (a 0). Each neuron is connected to many others, and whether an output neuron fires depends largely on the outputs of the neurons connected to it. Rosenblatt mimicked this behavior in his creation, the perceptron, which could differentiate between an image of a triangle and an image of a circle.

A Neuron

Each connection between neurons has a weight associated with its strength; as the connections vary, so do the weights. To calculate whether a particular neuron will fire, we multiply the activation of each connected neuron by the weight of its connection and sum the results. This is simply the dot product of two vectors.

Activation (White) and weight (Colored) of each Neuron connected with output Neuron
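In code, that weighted sum is a single dot product. Here is a small example with made-up activations and weights:

```python
import numpy as np

# Made-up activations of three input neurons (each between 0 and 1) and
# the weights of their connections to the output neuron.
activations = np.array([0.9, 0.2, 0.7])
weights = np.array([0.5, -1.0, 0.8])

# Whether the output neuron fires depends on this weighted sum.
weighted_sum = np.dot(activations, weights)
print(weighted_sum)  # 0.81
```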

To build the perceptron, Rosenblatt used a 20×20-pixel image. The activation of each pixel is its brightness, not 0 or 1 but a value between 0 and 1. Whether the image is a triangle or a circle depends on whether the dot product of these activations and the weights exceeds a threshold, or bias. If the value is less than the threshold, the image is classified as a triangle; otherwise, it's a circle.

Activation of each pixel with associated weight and dot product of both

But how was the perceptron able to identify whether the image was a triangle or a circle? Because the model was trained. Initially, all the weights were set to 0. The dot product of activations and weights was computed, and if the output was right, no changes were made. If the output was wrong, the weights were modified: the input activations were added to or subtracted from the weights. With enough iterations and training, the perceptron could accurately identify whether the image was a triangle or a circle.
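Here is a minimal Python sketch of that training rule. The data and sizes below are made up for illustration; Rosenblatt's machine used 20×20 = 400 brightness values as inputs.

```python
import numpy as np

def predict(weights, bias, x):
    """Fire (1) if the dot product of inputs and weights exceeds the bias."""
    return 1 if np.dot(weights, x) > bias else 0

def train(samples, labels, epochs=20):
    weights = np.zeros(samples.shape[1])  # all weights start at zero
    bias = 0.0
    for _ in range(epochs):
        for x, target in zip(samples, labels):
            error = target - predict(weights, bias, x)
            # Wrong output: add or subtract the input activations.
            # Right output (error == 0): nothing changes.
            weights += error * x
            bias -= error  # move the threshold in the opposite direction
    return weights, bias

# Toy data: two 4-"pixel" images, labelled 0 (triangle) and 1 (circle).
X = np.array([[0.9, 0.1, 0.8, 0.2],
              [0.1, 0.9, 0.2, 0.8]])
y = np.array([0, 1])
w, b = train(X, y)
print([predict(w, b, x) for x in X])  # [0, 1] once trained
```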

The problem — hardware or software?

As technology progressed, various advancements were made in artificial intelligence, one of the most notable being ALVINN, a self-driving initiative at Carnegie Mellon University in the late 1980s. The model was somewhat similar to the perceptron but had hidden layers of neurons between input and output, trained with a technique called backpropagation, which is still the foundation of modern neural networks and machine learning.
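To see what a hidden layer adds, here is a toy forward pass in Python. The shapes, sizes, and activation functions below are illustrative assumptions, not ALVINN's actual architecture.

```python
import numpy as np

# Toy forward pass through one hidden layer, to show what "hidden layers
# between input and output" adds over a single perceptron.
rng = np.random.default_rng(0)
x = rng.random(400)                         # e.g. a flattened 20x20 image
W1 = 0.01 * rng.standard_normal((16, 400))  # input -> hidden weights
W2 = 0.01 * rng.standard_normal((1, 16))    # hidden -> output weights

hidden = np.tanh(W1 @ x)                    # hidden-layer activations
output = 1 / (1 + np.exp(-(W2 @ hidden)))   # output neuron in (0, 1)
print(output)

# Backpropagation trains W1 and W2 by propagating the output error
# backwards through these layers, one layer at a time.
```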

The main problem was that we still didn't know what was limiting our neural networks and artificial intelligence: the hardware or the software. With this question in mind, Fei-Fei Li created ImageNet in 2009, the biggest collection of its kind, with 1.2 million human-labelled images. The theory was that a lack of training data was keeping the accuracy of these AI models low, so in 2010 the ImageNet Large Scale Visual Recognition Challenge was launched to push researchers around the world to improve the accuracy of their image-recognition algorithms.

As the years progressed, the error rate dropped sharply. Fei-Fei was right: with enough training data, models could add more hidden layers and be trained far more thoroughly, driving the error rate down. AlexNet, a model built by researchers at the University of Toronto, used 8 layers of neural networks, with over half a million neurons and 60 million weights, performing around 700 million math operations!

Top-5 error rate (Kang, 2020)

They used GPUs instead of ordinary CPUs, which greatly reduced the time required to perform these calculations. As the years progressed, the error rate fell to the point where these AI models became better at identifying images than people, reaching roughly a 3.6% error rate against a human error rate of about 5%.

What is the limiting factor?

Beyond 2017, accuracy certainly improved, but only to a point: performance stagnated and the error rate stopped falling. This was due mainly to three factors:

  1. Energy: training these models requires enormous amounts of energy, and GPUs are both power-hungry and expensive.
  2. Von Neumann Bottleneck: fetching numbers over buses between memory and processor often uses more energy than the operation itself, which bottlenecks the whole process beyond a point; faster transistors alone cannot fix it, because the cost lies in moving the data.
  3. Moore's Law: transistors are approaching their physical size limits, so we can no longer count on steadily shrinking, faster chips.

With these problems in our digital systems, we have really started to question whether digital computers will ever overcome them. The fact that analog computers can sidestep these problems is astonishing! In the next section, we will see how one company is using analog computing to do exactly that.

Mythic AI

Mythic AI has revived the concept of analog computing: it builds analog chips designed to perform matrix multiplication.

Mythic AI Analog Matrix processor

The smallest units in flash storage (the memory inside SSDs) are called flash cells. Each cell stores electrons to represent a digital 1 or 0, and can retain its charge for decades, which is what makes digital storage so reliable.

Mythic AI repurposes these digital flash cells as variable resistors. These variable resistors perform analog calculations when current is passed through them: I = V / R. Expressing resistance as conductance, G = 1 / R, the current becomes I = V × G, so a grid of such cells effectively performs a matrix multiplication of voltages and conductances.
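As a loose sketch of that idea (the numbers and shapes below are made up, and this is plain linear algebra rather than a device model):

```python
import numpy as np

# A grid of flash cells tuned to conductances G (where G = 1 / R) holds
# the weight matrix. Driving the rows with voltages V yields, on each
# column, a summed current I = G @ V: a matrix-vector multiply performed
# by the physics itself.
G = np.array([[0.5, 0.1],
              [0.2, 0.3]])   # conductances, in arbitrary units
V = np.array([1.0, 2.0])     # input voltages encoding the activations

I = G @ V                    # output currents are the dot products
print(I)                     # [0.7 0.8]
```

Since the dot product is exactly the operation that dominates neural-network inference, an entire layer's worth of multiply-accumulates happens in a single analog step, with no data shuttled over a bus.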

How much better are these analog chips?

Compared to power-hungry GPUs, a Mythic AI chip can perform about 25 trillion math operations per second while consuming only around 3 watts of power. Large GPUs, by contrast, perform 25 to 100 trillion operations per second while drawing 50 to 100 watts or more, and they also require extensive cooling, making them far less efficient per watt than their Mythic counterpart.
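Taking the figures above at face value, a quick back-of-the-envelope comparison of efficiency (illustrative arithmetic only, not measured data):

```python
# Back-of-the-envelope efficiency from the figures quoted above.
mythic_tops, mythic_watts = 25, 3     # ~25 TOPS at ~3 W
gpu_tops, gpu_watts = 100, 100        # ~100 TOPS at ~100 W

print(mythic_tops / mythic_watts)     # ~8.3 trillion ops per joule
print(gpu_tops / gpu_watts)           # ~1.0 trillion ops per joule
```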

Photo by Sumeet Singh on Unsplash

Just look at the size of a GPU! With further advances in analog technology, the efficiency and size of these chips should improve dramatically over the coming years.

Applications

The applications of these analog chips are unparalleled. The chips could replace heavy equipment everywhere from crypto mining to voice assistants! Here are some examples of analog computing in AI.

  1. Medical Imaging: AI can detect certain tumors in various parts of the body via medical image recognition, but this requires enormous computing power and many iterations, consuming a great deal of both energy and time. With advances in analog hardware, heavy GPUs could be replaced by analog chips to save power and time at the same accuracy.
  2. Voice Assistants: voice assistants rely on speech-recognition algorithms that must run in real time. The neural networks behind assistants like Alexa and Siri are heavily trained, and low-power analog computation could make these assistants faster and more responsive.
  3. NLP: natural language processing could likewise be accelerated by analog computing.
  4. Neural Networks: any application built on neural networks can be enhanced by analog computing.
  5. Virtual Reality: VR headsets could use analog computing to deliver a more immersive metaverse experience.
  6. Augmented Reality: with Apple and Google working on smart glasses that could open a new era of augmented reality, we might see analog computing in the 2nd or 3rd generation of these AR devices.
  7. Crypto and Hashing: crypto mining and hashing consume huge amounts of energy and time; analog computing in the next generation of hardware could ease that energy burden.

Potential is technically limitless!

Conclusion

To conclude, the 21st century might witness the end of the digital era, with analog technology conquering the decades ahead. This doesn't mean the literal end of digital technology; rather, I believe it is the start of the next generation of artificial intelligence and computing. Like the dual nature of light, artificial intelligence should utilize the capabilities of both digital and analog computing to produce better, more accurate results.

With the metaverse on the verge of the next technological revolution, analog computing might play a crucial role in determining what the metaverse can achieve. As the internet transitions from a 2-dimensional to a 3-dimensional interface, analog computing can resolve the issues associated with heavy real-time computation, giving rise to the next level of virtual and augmented reality experiences. Like I say, the potential is technically limitless!

References

Rosenblatt, F. (1958). The Perceptron: A Probabilistic Model for Information Storage and Organization in the Brain. Retrieved from: https://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.335.3398&rep=rep1&type=pdf

ImageNet. (2022). Retrieved from: https://www.image-net.org/challenges/LSVRC/index.php

Moore’s Law. (2018). Retrieved from: https://www.nature.com/news/the-chips-are-down-for-moore-s-law-1.19338

Mythic AI. (2022). Retrieved from: https://www.mythic-ai.com/

Kang, Dae-Young, Duong, Hieu, & Park, Jung-Chul. (2020). Application of Deep Learning in Dentistry and Implantology. The Korean Academy of Oral and Maxillofacial Implantology, 24, 148–181. doi:10.32542/implantology.202015
