#AI in short:
Giving a machine the ability to:
- Continuously receive information from its environment, frame by frame, moment to moment; listening to or sensing information about space, objects & their interactions (creating Big Data);
- Gain awareness of its environment – the ability to identify objects by their properties & behavior, continuously self-learn by observing changes in those properties & behavior, or be programmed to learn to deal with new information (ML);
- Gain the ability to predict possible outcomes for any given moment & their probabilities – the % chance of each outcome happening – in order to make a decision (ML using Statistics);
- Take a human-like action in response (Robotics).
Sometimes the hardest thing is to fathom what the human brain can do and to articulate human intelligence. An artificially intelligent machine, in short, mimics human intelligence – constantly observing the environment, interpreting information, predicting outcomes, making decisions and taking actions.
An artificially intelligent machine would have, at minimum, three components:
1) Artificial sense organs
2) Artificial processing unit & artificial neural network
3) Artificial motor organs
- Artificial sense organs (to mimic the human sense organs)
The human body uses five sense organs to receive information. Machines use artificial sensors (to receive information – be it in the form of light patterns, sound waves, heat, pressure / vibrations, displacement / rotation etc). All of this analog information (waves) is then converted to a digital code – a pattern of 1s & 0s – to be fed into a machine to make a decision. #DataAcquisition
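The analog-to-digital step above can be sketched in a few lines. This is a minimal illustration, not real DAQ hardware code: a sine wave stands in for a sound wave, and each sample is quantized into an 8-bit pattern of 1s & 0s (the sampling rate, bit depth and tone frequency are arbitrary choices for the example).

```python
import math

def quantize(sample, bits=8):
    """Map an analog sample in [-1.0, 1.0] to a digital code of 1s & 0s."""
    levels = 2 ** bits
    # Shift the sample to [0, 1], scale it across the available levels.
    code = int((sample + 1.0) / 2.0 * (levels - 1))
    return format(code, f"0{bits}b")  # binary string, e.g. "01111111"

# Sample a 440 Hz tone (standing in for a sound wave) at 8 kHz.
sample_rate = 8000
signal = [math.sin(2 * math.pi * 440 * t / sample_rate) for t in range(8)]
digital = [quantize(s) for s in signal]
```

Real data-acquisition chips do essentially this, only in hardware and millions of times per second.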
Businesses making photo sensors, sound sensors, heat sensors, gyroscopes etc – continuously feeding information to develop machine intelligence – are all in the business of AI. Anything observed in real life continuously over a period of time (like you and me) contributes to ‘big data’ and creates opportunities to analyse behaviors and predict the future.
- Artificial processing unit & artificial neural network (to mimic the human brain & the real neural network)
Humans can identify an object by how it appears, how it sounds, how it feels etc. Machines are trained to identify an object (its properties & behavior) by feeding in the data humans use to identify objects in their reality (ML). Every sensory input signal to our brain can have an equivalent digital signal that can be fed to a machine’s artificial brain. The artificial neural network mimics the decision-making process (decision tree) of the natural neural network in the brain. #Algorithms #DecisionTrees
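A decision tree like the one described above can be sketched as nested branches on sensed properties. The features (`has_fur`, `says_meow`, `weight_kg`) and thresholds here are made-up illustrations, not a trained model – a real tree would learn its branches from data.

```python
def classify(obj):
    """Classify an object from hypothetical sensed properties:
    obj is a dict with 'has_fur', 'says_meow' and 'weight_kg' keys."""
    if obj["has_fur"]:               # first branch: a visual/tactile cue
        if obj["says_meow"]:         # second branch: an auditory cue
            return "cat"
        return "dog"
    if obj["weight_kg"] > 1000:      # heavy object, no fur
        return "car"
    return "unknown"

label = classify({"has_fur": True, "says_meow": True, "weight_kg": 4})
```

Each `if` plays the role of a neuron-like decision point: input properties flow down the branches until one output label fires.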
This also means that all of the AI we build can only be intelligent for a world as described by humans – our collective experience of this reality.
Humans predict outcomes, assessing what’s most likely to happen in a given circumstance, then make a choice / decision & take action.
Eg: When a human sees a kitten crossing a busy road, a set of neurons fires up in the brain, processing that input information, making a decision, and sending signals to a specific output organ to take action. In a machine, a set of artificial neurons lights up inside its silicon brain chip to send a signal to the robotic motor organ to take action.
Machines are programmed to process input information and predict possibilities using statistics – mathematical techniques for dealing with known information, unknown information and assumptions, and for predicting the % chances of possible outcomes. Humans use both mathematics & intuition (gut feeling).
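The simplest form of predicting % chances is counting historical outcomes. A minimal sketch, using a hypothetical log of what happened the last ten times a driver encountered a kitten on the road:

```python
from collections import Counter

def outcome_probabilities(history):
    """Estimate the % chance of each outcome from past observations."""
    counts = Counter(history)
    total = len(history)
    return {outcome: count / total for outcome, count in counts.items()}

# Hypothetical observations of past moments like this one.
history = ["stops", "stops", "stops", "swerves", "stops", "stops",
           "swerves", "stops", "stops", "stops"]
probs = outcome_probabilities(history)
```

Real ML models do far more (they condition on the circumstances of each moment), but at their core they too turn observed frequencies into probabilities.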
- Artificial motor organs (to mimic the human motor organs)
The human body uses its motor organs (mouth, hands, legs, face etc) to respond to a real-world moment and continuously interact with its environment. A machine is given a few artificial substitutes – like a robotic arm – to take action (eg: a robot moving an object from one point to another).
Real & digital environments
AI/ML for the Digital world: Artificial Intelligence is not only about making sense of the real world objects, physical spaces & physical human interactions with them. It also involves understanding virtual environments & observing human interactions with a virtual world and predicting new digital behaviors.
For example, an eCommerce store may observe every moment a user spends on a website, record & analyze that behavioral information, and predict the likelihood of the user taking certain actions. This eCommerce store may also have a web camera for electronically knowing the customer (eKYC), reading facial expressions as the user looks through the products, and have a chat bot pop up to start a conversation – “Hey, you seem to like red, party wear dresses.. how about these?”.
Machine Learning for an eCommerce store, would mean observing objects (digital user profiles), their behaviors (digital user interactions), observing how these online behaviors change with changing online market environment (prices, reviews), observing how these online behaviors change with changes in external environments (recession, climate change), predicting the likelihood of online sales happening in the next 3-4 weeks.
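The behavioral scoring described above can be sketched as combining a few signals into a likelihood. Everything here is illustrative: the signal names (`pages_viewed`, `items_in_cart`, `past_purchases`) and the weights are invented for the example – a real model would learn them from historical data.

```python
import math

def purchase_likelihood(pages_viewed, items_in_cart, past_purchases):
    """Combine hypothetical behavioral signals into a purchase probability."""
    # Illustrative weights, not learned from data.
    score = 0.3 * pages_viewed + 1.2 * items_in_cart + 0.8 * past_purchases - 3.0
    return 1 / (1 + math.exp(-score))   # logistic squash into (0, 1)

casual_browser = purchase_likelihood(pages_viewed=2, items_in_cart=0, past_purchases=0)
keen_shopper = purchase_likelihood(pages_viewed=10, items_in_cart=2, past_purchases=3)
```

The store would act on the score – say, pop up the chat bot only when the likelihood crosses some threshold.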
ML requires training the machine with large troves of data – of users, choices & actions – until the point the machine can learn on its own and deal with new, unknown information.
To test a machine’s intelligence, and how accurate your ML’s predictions are, some data is kept aside, hidden from the machine, just so we can test how the ML’s results change with exposure to new / unknown information. Eg: Let’s say an ML program predicts a 75% likelihood of a patient not returning after a surgery, assuming recovery with medications. Meanwhile, a new virus has come into existence that affects patients in post-surgery conditions. The ML program is not going to be aware of this new information until we feed it and train it to incorporate this new variable into its prediction.
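Holding data back works like this in miniature. The “model” below is deliberately trivial – it just predicts the majority outcome it saw during training – and the patient records are hypothetical, but the train/test pattern is the real idea:

```python
from collections import Counter

def majority_model(training_outcomes):
    """The simplest possible 'model': predict the most common outcome seen."""
    return Counter(training_outcomes).most_common(1)[0][0]

# Hypothetical patient records: most recover, a few return after surgery.
outcomes = ["recovers"] * 8 + ["returns"] * 2

train, test = outcomes[:7], outcomes[7:]   # keep the last 3 records hidden

prediction = majority_model(train)
accuracy = sum(prediction == actual for actual in test) / len(test)
```

On the hidden records the model is wrong two times out of three – exactly the kind of gap between training performance and real-world performance that the hold-out set exists to expose.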
Connecting the dots..
Data Science today is a field of study unifying various specializations: data acquisition (sensors / listeners), big data analytics (data mining, analysis), decision trees & algorithms, machine learning & predictive modeling. Each of these subjects, together as a unified whole, intends to help us better understand our reality, better model the world we live in and ultimately enhance our experience of life.
In the case of the human body, sensory perception > information processing, outcome prediction & decision making > taking action all happen incredibly fast – within fractions of a second. Each of these steps is today a field of research, study & specialization, and each has given rise to a variety of professions.
In all this, what’s Quantum Computing?
From a single moment in the real-world, before we make a choice, there are infinite possible outcomes (think parallel universes or multiverses). When we make a choice in the moment, as we’re taking an action, we’re reducing the number of possible outcomes, eventually narrowing down the outcomes from infinity to one, creating the next moment.
If we have to simulate a real-world moment in a computer, to see which of all possible scenarios leads to the best possible outcome, it’s practically impossible for a normal computer to do this, as it can handle only one possibility at a time. Eg: If x = 10, then y = 70%. A quantum computer can compute all possible values for x at once, as it exploits the quantum behavior of sub-atomic particles. This behavior in quantum mechanics is called “Superposition”.
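Superposition itself can be simulated (slowly!) on a normal computer, which helps demystify it. A minimal sketch: a qubit’s state is a pair of amplitudes for the outcomes 0 and 1, and a Hadamard gate turns a definite 0 into an equal superposition of both outcomes at once.

```python
import math

def hadamard(state):
    """Apply a Hadamard gate to a single-qubit state (amp_0, amp_1)."""
    a, b = state
    s = 1 / math.sqrt(2)
    return (s * (a + b), s * (a - b))

qubit = (1.0, 0.0)        # definitely in state |0>
qubit = hadamard(qubit)   # now in an equal superposition of |0> and |1>

# The squared amplitudes give the % chance of measuring each outcome.
prob_0, prob_1 = abs(qubit[0]) ** 2, abs(qubit[1]) ** 2
```

The catch is that a classical simulation of n qubits needs 2^n amplitudes, which explodes quickly – a real quantum computer holds all of them physically at once.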
Weird? That’s why it is said – if you’re not disturbed by Quantum mechanics, you haven’t really understood it.