ARTIFICIAL INTELLIGENCE
What is it?
Artificial intelligence (AI) is arguably the most important technological revolution since the invention of computing, and today it is one of the most thought-provoking topics in technology and business. This enthusiasm has foundations: we live in an increasingly connected and intelligent world, one in which an algorithm can help build a car or compose jazz. AI makes it possible for machines to learn from experience, adjust to new inputs, and perform tasks the way humans do. Most of the examples of artificial intelligence you hear about today, from chess-playing computers to self-driving cars, rely heavily on deep learning and natural language processing. Using these technologies, computers can be trained to perform specific tasks by processing large amounts of data and recognizing patterns in that data. In short, AI is the scientific field of computing focused on creating programs and mechanisms that display behaviors considered intelligent. In other words, AI is the idea that "machines can think like human beings."
History
The term artificial intelligence was adopted in 1956, but it has become more popular today thanks to increased data volumes, advanced algorithms, and improvements in computing power and storage.
Initial research on artificial intelligence in the 1950s explored topics such as problem solving and symbolic methods. In the 1960s, the United States Department of Defense showed interest in this type of work and began training computers to mimic basic human reasoning. For example, the Defense Advanced Research Projects Agency (DARPA) conducted street mapping projects in the 1970s, and DARPA produced smart personal assistants in 2003, long before Siri, Alexa, or Cortana were common names.
This early work paved the way for the automation and formal reasoning that we see in computers today, including decision support systems and intelligent search systems that can be designed to complement and augment human capabilities.
Although Hollywood movies and science fiction novels depict artificial intelligence as human-like robots taking over the world, the current generation of AI technologies is neither that scary nor quite that smart. Instead, artificial intelligence has evolved to bring many specific benefits to every industry.
AI is not new: 2,300 years ago Aristotle was already trying to reduce the mechanics of human thought to rules, and since the time of Leonardo da Vinci, thinkers have tried to build machines that behave like humans.
In 1769 an automaton called The Turk, built by the Hungarian engineer Wolfgang von Kempelen, toured the European courts, challenging anyone who dared to face it at chess. It played against Napoleon, against Benjamin Franklin, against chess masters, and it beat them. Years later it was discovered that The Turk was operated by a human hiding inside the gaming table. Mirrors placed behind the automaton's eyes allowed the hidden operator to see the board, and ingenious clockwork mechanisms let him guide the automaton's hand to move the pieces. Up to 15 chess masters are said to have operated The Turk, the most famous of them reportedly a dwarf named Tibor Scardanelli, who could easily fit inside the table and was also an extraordinary chess player.
The Turk was not artificial intelligence, but it shows that the desire to build intelligent machines is not unique to our time.
We had to wait until 1936 for modern artificial intelligence to begin. It was essentially founded by Alan Turing, the mathematician who deciphered the secret Nazi codes of the legendary Enigma machine. His work is credited with shortening World War II by two years, since it allowed the Allies to read the Germans' secret messages.
In 1936 Alan Turing published his concept of a universal machine, which essentially described what a computer, and a computer algorithm, is. In 1950 he formalized the beginning of artificial intelligence with his Turing Test, a test meant to decide whether a machine is intelligent: if an interrogator poses questions to both a human and an AI and cannot distinguish which answers come from the human and which from the AI, then the AI is considered intelligent.
Why is artificial intelligence important?
- Artificial intelligence automates repetitive learning and discovery through data. Artificial intelligence is different from hardware-based robotic automation. Instead of automating manual tasks, artificial intelligence performs frequent, high-volume computerized tasks reliably and without fatigue. For this type of automation, human inquiry is still critical to setting up the system and asking the right questions.
- AI adds intelligence to existing products. In most cases, artificial intelligence will not be sold as a standalone application. Instead, the products you already use will be enhanced with artificial intelligence features, much like Siri was added as a feature to a new generation of Apple products. Automation, conversational platforms, bots, and smart machines can be combined with massive amounts of data to improve many technologies in the home and workplace, from security intelligence to investment analytics.
- Artificial intelligence adapts through progressive learning algorithms, letting the data do the programming. Artificial intelligence finds structure and regularities in the data so that the algorithm acquires a skill: it becomes a classifier or a predictor. Just as an algorithm can learn to play chess, it can also learn which product to recommend next online, and models adapt when new data is provided. Backpropagation is a training technique that lets a model adjust its internal parameters, through training on additional data, when its first answer is not entirely correct.
- Artificial intelligence analyzes more and deeper data using neural networks that have many hidden layers. Building a fraud detection system with five hidden layers was almost impossible a few years ago. All of that has changed with today's computing power and Big Data. A lot of data is needed to train deep learning models because they learn directly from the data: the more data you can provide them, the more accurate they become.
- Artificial intelligence achieves incredible accuracy through deep neural networks, which was impossible before. For example, your interactions with Alexa, Google Search, and Google Photos are all based on deep learning, and they keep getting more accurate the more we use them. In the medical field, the artificial intelligence techniques of deep learning, image classification, and object recognition can now be used to detect cancer on MRIs (magnetic resonance imaging) with the same accuracy as highly trained radiologists.
- Artificial intelligence gets the most out of data. When algorithms are self-learning, the data itself can become intellectual property. The answers are in the data; you just have to apply artificial intelligence to bring them out. Because the role of data is now more important than ever before, it can create a competitive advantage. If you have the best data in a competitive industry, even if everyone applies similar techniques, the best data will win.
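To make the ideas above concrete, here is a minimal sketch of backpropagation: a tiny one-hidden-layer network that teaches itself the XOR function from four data points. All choices here (three hidden units, the learning rate, the epoch count) are illustrative assumptions, not a production recipe; the point is only that the network starts out wrong and reduces its error by repeatedly adjusting its weights whenever an answer is not entirely correct.

```python
import math
import random

random.seed(0)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Training data: inputs and target labels for XOR.
data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 0)]

H = 3                                                                # hidden units (assumed)
w1 = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(H)]   # input -> hidden weights
b1 = [0.0] * H
w2 = [random.uniform(-1, 1) for _ in range(H)]                       # hidden -> output weights
b2 = 0.0
lr = 0.5                                                             # learning rate (assumed)

def forward(x):
    # One forward pass: hidden activations, then the output prediction.
    h = [sigmoid(sum(w1[j][i] * x[i] for i in range(2)) + b1[j]) for j in range(H)]
    y = sigmoid(sum(w2[j] * h[j] for j in range(H)) + b2)
    return h, y

def total_error():
    return sum((forward(x)[1] - t) ** 2 for x, t in data)

err_before = total_error()

for epoch in range(5000):
    for x, t in data:
        h, y = forward(x)
        # Backpropagation: push the output error back through the layers...
        d_y = (y - t) * y * (1 - y)
        d_h = [d_y * w2[j] * h[j] * (1 - h[j]) for j in range(H)]
        # ...and nudge every weight to shrink that error.
        for j in range(H):
            w2[j] -= lr * d_y * h[j]
            b1[j] -= lr * d_h[j]
            for i in range(2):
                w1[j][i] -= lr * d_h[j] * x[i]
        b2 -= lr * d_y

err_after = total_error()
print(f"squared error before training: {err_before:.3f}, after: {err_after:.3f}")
```

The same loop, scaled up to millions of weights and many hidden layers, is what the deep learning systems described above use; the data, not a hand-written rule, determines what the classifier learns.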