Artificial Intelligence, for and by real people
It’s not easy to define artificial intelligence, let alone understand how it is going to affect our future. If it is a foreign concept to the vast majority of society, how can we expect to control how it influences us? In fact, the definition of plain old intelligence itself is still subject to debate.
What makes a thing intelligent?
Philosophy aside, for this discussion let’s define intelligence as the capacity to learn: the ability to acquire knowledge and apply it to new problems. The greater this ability, the higher the intelligence. So what makes something artificially intelligent? “Artificial intelligence” is usually coupled tightly with computer science, where it serves as a catch-all for systems that defy our expectations of what computers can, and cannot, do. If the general consensus is that a task requires “human intelligence,” then a computer system that can complete the task reasonably well must, therefore, be artificially intelligent. Consensus notwithstanding, the trouble with this definition is that as our understanding of what computers can do continues to advance, our definition of artificial intelligence moves right along with it.
Today, our understanding of AI centers on leading-edge technology and how large corporations have decided to package it. The most familiar forms of AI are personal assistants such as Amazon’s Alexa and Apple’s Siri, because they have been made to look simple and easy for anyone to use. Other outstanding examples include the implementation of IBM’s Watson that won Jeopardy!, the self-driving cars competing in DARPA challenges, and, of course, Tesla Autopilot. As technology advances, we grow more dependent on new innovations and often lack perspective on how they affect, and change, our everyday lives. We get comfortable with a version of AI helping us perform everyday tasks, but how long will it be around before it is no longer considered intelligent? Are these labels of AI temporary? As the AI revolution continues, the roles in society filled by computers will expand to scopes that seem unrealistic today, and consumer products like Siri and autonomous vehicles will become part of what we understand to be the norm.
So the question remains: will today’s technologies cease to be intelligent as the state of the art advances far beyond them?
AI is hard to understand when the technology associated with it seems futuristic and the industry around it is advancing at a rapid rate. We’re surrounded by systems that were once considered cutting-edge but have since been relegated to the category of things that simply “are.” Let’s take a step back and look at AI as a concept rather than a definition, and include the technology now considered antiquated, to get the full picture. For example, the modern PID controller (which dates back to the 1930s, and was described as early as the 1860s) could be argued to be intelligent, as defined above, despite predating the computer science era. In practice, these program loops are used to adjust changing processes in a controlled, progressive manner similar to that of a human. The most relatable explanation (and also one of my favorites) of the intelligence behind a PID controller is the battle of setting a shower to the perfect temperature. With your own shower, you might begin with a guess of how far to open each tap, wait a moment, then check the temperature. With an unfamiliar shower, however, until you check the water you have no idea how much turning the tap handles affected the temperature. Once you check, you know how far the temperature is from your preference, and you know how far you turned the taps to get there. Without even thinking about it consciously, the intelligent system of “you” suddenly “knows” how to adjust the taps to get the temperature right. With each step of trial and error, the temperature gets closer to its target, and the corrections you make become proportionately smaller and ever more accurate. The second time you use this shower, the guess-and-check process begins with a much smaller error, because you’ve learned the relationship between the taps and the water temperature. On each subsequent trial, your initial performance improves again based on what you learned during the previous iterations. That process is demonstrably intelligent: a biological computer has learned a lesson and applied it to a variable situation.
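To make that loop concrete, here is a minimal sketch of a PID controller in Python applied to the shower scenario. The gains, the 38-degree target, and the one-line “shower” model are all made-up numbers chosen for illustration; a real controller would be tuned to the process it drives.

```python
# A minimal PID sketch. All gains and the simple "shower" model below are
# hypothetical values for illustration only.

def make_pid(kp, ki, kd):
    """Return a PID step function with the given (assumed) gains."""
    state = {"integral": 0.0, "prev_error": None}

    def step(setpoint, measurement, dt):
        error = setpoint - measurement                      # how far off are we?
        state["integral"] += error * dt                     # accumulated past error
        derivative = 0.0 if state["prev_error"] is None else (
            (error - state["prev_error"]) / dt)             # how fast the error is changing
        state["prev_error"] = error
        return kp * error + ki * state["integral"] + kd * derivative

    return step

pid = make_pid(kp=2.0, ki=0.5, kd=0.1)
temperature = 15.0                                          # cold water to start

for _ in range(15):
    tap = pid(setpoint=38.0, measurement=temperature, dt=1.0)   # how far to open the hot tap
    # Crude stand-in for the shower: the water drifts toward a level set by the tap.
    temperature += 0.3 * ((15.0 + tap) - temperature)
    print(round(temperature, 1))                            # creeps toward 38 with shrinking error
```

Run it and the printed temperatures climb toward the target with ever-smaller corrections, the same guess-and-check behavior described above, expressed in a dozen lines of arithmetic.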
So, considering that a common feedback controller program can achieve those same results, shouldn’t it also be considered intelligent? If you look back to the end of WWI, such systems had already been realized as mechanical, pneumatically powered devices tasked with jobs that, until that point, had required a human. While the label of AI didn’t exist yet, it may have been appropriate for the technology of the time; yet it now seems preposterous to group such a system with an autonomous vehicle. And that’s really the point: artificial intelligence is a spectrum, and systems do not cease to be intelligent as that spectrum expands. Considering this, it is apparent that intelligent systems may employ biological brains, ones and zeros, pressure differentials, or any other means to complete their tasks. To be clear, though, the differentiating factor is not necessarily the complexity of the task but the system’s application of gained knowledge to that task; that is what defines its characteristic of “intelligence.”
Artificial Intelligence Spectrum
Artificial intelligence is a spectrum, and systems do not cease to be intelligent as that spectrum expands.
Machine learning and AI are often grouped together and thought of interchangeably, which I would argue contributes to the confusion around AI. Machine learning is a way of developing an intelligent system automatically, with or without human supervision. The field has introduced the concept of self-taught machines, wherein a system observes its own performance and attempts to compensate for error. The actual processes of learning vary, but they can often be thought of as similar to how a child learns colors, shapes, or not to touch hot stoves. Through repeated exposure to a color and its name, the child’s brain forms a connection, a kernel of knowledge, linking the label to the observation. In reinforcement learning, feedback about actions taken, such as touching the stove, is represented to the learning mind as punishment (such as the pain of a burn) or reward.
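As a rough sketch of that reward-and-punishment idea, here is a toy reinforcement-learning loop in Python. The two actions, their payoffs, and the learning rate are invented for this example; real reinforcement learning deals with states, delayed rewards, and far richer environments, but the core update is the same: nudge your estimate of an action toward the feedback you received.

```python
import random

# Two actions the "child" can take; the stove punishes, the toy rewards.
# These payoffs are hypothetical numbers chosen purely for illustration.
ACTIONS = ["touch_stove", "play_with_toy"]
REWARDS = {"touch_stove": -10.0, "play_with_toy": 1.0}

values = {action: 0.0 for action in ACTIONS}   # the learner's current estimate of each action
learning_rate = 0.2
exploration = 0.1                               # chance of trying something at random

for episode in range(100):
    # Mostly pick the action currently believed to be best, occasionally explore.
    if random.random() < exploration:
        action = random.choice(ACTIONS)
    else:
        action = max(values, key=values.get)

    reward = REWARDS[action]                                          # punishment or reward
    values[action] += learning_rate * (reward - values[action])      # nudge the estimate

print(values)   # after a few burns, touch_stove is valued far below play_with_toy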
AI, regardless of how it is defined or how it advances, will play a role in our future. It’s an extremely powerful tool with many uses and benefits, but it is obscured by the illusion that it’s impossible for the everyday user to utilize or understand. Some 30 years ago the personal computer revolution took off, and an influx of enthusiasts began applying tools they had used in school and work to their personal lives. This environment of ideation and experimentation further accelerated progress as more and more of the workforce adopted the new technology. The Internet then ushered in a new era of connectivity and even more ways to use computers in everyday life, until eventually they found homes not only in our pockets and on our desks, but in almost everything around us. The availability and ubiquity of computers have been a major catalyst in their development and progress, and the same can be true of AI and machine learning. The impressive feats of AI may make it seem completely foreign, as if these tasks are far beyond what a normal person can do with a computer. This is especially true if you are not a programmer, and even for the programmer, the learning curve for resources such as TensorFlow can be quite steep. These processes of ideation and exploration are necessary for true innovation and advances in science, and coding does not need to be an impediment to creativity.
"AI regardless of how it is defined or how it advances will play a role in our future."
If we go back to my point that intelligence is a spectrum and think about how quickly machines and technology are advancing, it is conceivable that the vast majority of humans won’t be able to keep up with this rapid advancement. Soon, those of us who are not technically above average, or advanced for that matter, may be the next intelligent things to be thought of as ordinary. Low-code tools such as the nio System Designer and Node-RED break down this barrier to entry and give anyone access to the most complex areas of computer science, regardless of their technical proficiency. They empower everyone with a computer to apply cognitive solutions to their own experiences. Whether it’s a traditional control problem or a more complex machine learning algorithm, it all begins with the same fundamentals. Many of these fundamentals are simple logical operations and arithmetic that need only be connected together in the right way to realize genuinely intelligent behavior, as sketched below. Others are intelligent on their own and need only be connected to the rest of a solution. Traditionally this is a programming challenge: most resources exist as code, and the learning materials begin at the college level.
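To illustrate how far simple arithmetic connected in the right way can go, here is a small sketch of a perceptron learning the logical OR function from examples. The training data, learning rate, and number of passes are arbitrary choices for the example; the point is that nothing more exotic than multiplication, addition, and a threshold is involved.

```python
# Nothing here but multiplication, addition, and a comparison, wired together.
# The perceptron learns the logical OR function from labeled examples.

examples = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]   # inputs -> OR

weights = [0.0, 0.0]
bias = 0.0
rate = 0.1

for _ in range(20):                                   # a few passes over the examples
    for (x1, x2), target in examples:
        weighted_sum = weights[0] * x1 + weights[1] * x2 + bias
        output = 1 if weighted_sum > 0 else 0         # simple threshold decision
        error = target - output
        # Adjust each connection a little in the direction that reduces the error.
        weights[0] += rate * error * x1
        weights[1] += rate * error * x2
        bias += rate * error

for (x1, x2), target in examples:
    s = weights[0] * x1 + weights[1] * x2 + bias
    print((x1, x2), "->", 1 if s > 0 else 0, "expected", target)
```

Visual tools simply let you wire these same kinds of operations together with blocks instead of typing them out, which is exactly the barrier-lowering the paragraph above is describing.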
These barriers to entry are not necessary, and in many cases they are an impediment to solving what should be a simple problem, or to simply exploring a new concept. Since the industrial revolution, we have been finding ways to automate tasks of increasing complexity, freeing human minds from the drudgery of a daily grind to imagine, to innovate, and to shape their own future. The world of AI is no different: it is another tool to free us from tasks that, until now, have required a human. If people are empowered to leverage these tools and use the information generated from and all around themselves, the future of AI will look less like corporations hawking gadgets and more like individuals improving their own experiences. The AI revolution probably will not be an uprising of machines, but an uprising of ideas and human knowledge.
I think that people are the driving force of innovation, so we shouldn't be "adapting" to it, but "driving" it.
About the Author
Tom Young
Application Engineer at niolabs
If you broke it, Tom can fix it. If you dream it, Tom can build it. Bonus fun fact: He also loves Jurassic Park.
Email: tyoung@n.io
Github: https://github.com/tyoungNIO