
Artificial intelligence

Diagnosing diseases based on genetic information, medical histories and exams. Making predictions based on large volumes of data. Finding a crime suspect with facial recognition. Carrying out financial analysis and assessing a company's risks through big data queries. All these tasks, previously done only by humans, can now be performed with artificial intelligence (AI). But how can we define this technology that is changing our world and promises to transform our society even more?


Well, there are several definitions of AI. It has been described as the activity dedicated to making machines intelligent, with intelligence being the quality that allows an entity to function properly and predictably in its environment. Although we do not have an exact definition, we can say that AI involves computational technologies that act in ways inspired by how humans and other biological beings feel, learn, reason and make decisions, even if the machines go about it differently. A simpler description would be: it is a multidisciplinary field whose objective is to automate activities that require human intelligence.


Although, not long ago, AI was seen as science fiction, its history is intertwined with that of computing itself. Alan Turing, known as the father of computing, was one of the pioneers in the area. The development of AI began in the 1950s. In 1956, a group of researchers, including Nathan Rochester, from IBM, Claude Shannon, the father of information theory, and John McCarthy, met at a conference on the Dartmouth College campus in the United States. At this meeting, McCarthy coined the term artificial intelligence, defining it as “the science and engineering of making intelligent machines”.


From this event, the first research efforts and results in AI began to emerge. In 1959, the term machine learning appeared for the first time, describing a system that allowed computers to learn a function without being directly programmed for it. Put simply, after learning from the input data fed to its algorithm, the machine would be able to perform the task automatically.


Decades later, however, it became clear that advances were not happening at the speed imagined. The pioneering AI researchers believed that, within at most a generation, machines would have the same intellectual capacity as humans, which did not happen. This frustration brought periods of low investment in the area, although funding was later resumed, often driven by technological developments in computer hardware.


The Machine's Victory

In the 1990s, with the emergence of the commercial internet, AI received a new impetus when it was used in the development of navigation and search systems. The prototype of what would become today's Google emerged at that time as a tool based on programs that analyzed web data and classified it into predetermined interest groups. Also during this period, a chess-playing machine was developed: Deep Blue was capable of analyzing millions of positions and, thus, anticipating responses and choosing the best move. It won a game against the reigning world champion, the Russian grandmaster Garry Kasparov. It was the first time a machine had defeated a world chess champion under regular tournament conditions, although Kasparov won the match overall.


From then on, many technological advances led to an exponential increase in artificial intelligence applications: process automation carried out by “robots” in industry; intelligent systems that analyze images to recognize patterns and assist clinicians in making diagnostic decisions; personal assistants such as Siri and Alexa that interact with smartphone users; digital games that learn the player's behavior; autonomous vehicles; and many other technologies that are now part of our daily lives.


The Golden Age of AI

But what is turning the beginning of the 21st century into the “golden age of AI”? It is possible to list some factors:

1 – High connectivity: not only is society more digitally connected, but so are machines, through a wide range of sensors.

2 – Low computational cost: electronic “chips” with processing capacity equal to or greater than that of previous generations keep being launched at ever lower cost.

3 – Large amounts of data (big data): we live in a world where the volume of data available as a source of information has grown exponentially. Processing this data and extracting relevant content is a major challenge, and AI has proven to be a powerful tool for the job.

4 – Machine learning: algorithms and mathematical models that extract or recognize hidden patterns in a dataset. This ability to associate new data with learned patterns allows machines, for example, to identify objects in images or videos, as the sketch after this list illustrates.
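
To make the idea concrete, here is a minimal sketch of learning patterns from examples and then recognizing them in new data. It uses scikit-learn and its small bundled dataset of handwritten-digit images purely for illustration; the dataset, model choice and parameters are assumptions, not something prescribed by the text.

```python
# Minimal sketch: a model learns hidden patterns from labelled examples
# and then recognizes those patterns in data it has never seen.
from sklearn.datasets import load_digits                 # small bundled image dataset (assumption)
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

digits = load_digits()                                   # 8x8 pixel images of handwritten digits
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, test_size=0.25, random_state=0)

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)                              # "learn" the patterns from examples

predictions = model.predict(X_test)                      # recognize patterns in unseen images
print(f"Accuracy on unseen images: {accuracy_score(y_test, predictions):.2f}")
```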


What is the difference between AI, machine learning and deep learning?

With the advancement of artificial intelligence in our daily lives, two other terms also come up: machine learning and deep learning. It is common to treat the three as synonyms, but they are not.


Artificial intelligence is the ability of a machine to act in ways inspired by the behavior of humans and other biological beings. AI is the broadest of the three terms, meaning all machine learning is AI, but not all AI is machine learning.


But what is machine learning anyway? This subset of AI is, as the name implies, the ability of a machine to learn from a large amount of input data. From all this information, models are built that are capable, for example, of making predictions, in some cases with minimal error. As an example, we can cite the use of machine learning techniques to predict stock prices. Such a prediction model, based, for example, on neural networks, could indicate the next day's closing value of the most important securities traded on an exchange, as in the sketch below.
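
Purely to illustrate the shape of such a model, here is a hedged sketch: a small neural network (scikit-learn's MLPRegressor) trained to map a window of past closing prices to the next day's close. The price series is synthetic random data, and the network size, window length and training split are arbitrary assumptions; a real predictor would need historical market data and far more careful validation.

```python
# Illustrative sketch only: predict the "next closing price" from the last few closes.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(42)
prices = 100 + np.cumsum(rng.normal(0, 1, 500))      # synthetic stand-in for closing prices

window = 10                                          # features: the 10 previous closes (assumption)
X = np.array([prices[i:i + window] for i in range(len(prices) - window)])
y = prices[window:]                                  # target: the following day's close

model = MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=2000, random_state=0)
model.fit(X[:-50], y[:-50])                          # train on the earlier part of the series

next_close = model.predict(prices[-window:].reshape(1, -1))
print(f"Predicted next closing value: {next_close[0]:.2f}")
```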


Deep learning is a subset of machine learning and also the technology that makes many of its applications viable. Based on neural networks inspired by the cognitive capacity of the human brain, deep learning dispenses with the data pre-processing step that traditional machine learning usually requires and is able to interpret data in its raw form. Systems or machines based on deep learning learn and improve the more data they are exposed to. Some examples are real-time facial recognition, internet search engines and chatbots.
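
As a minimal sketch of what “learning from raw data” looks like in code, the example below builds a tiny convolutional network that consumes raw image pixels directly, with no hand-crafted feature extraction step. PyTorch, the layer sizes and the random input batch are all assumptions made for illustration; real facial-recognition or search systems are vastly larger, but the principle is the same.

```python
# Minimal sketch: a small convolutional network fed raw pixels, no manual feature engineering.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1),   # learns low-level visual features itself
    nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Conv2d(16, 32, kernel_size=3, padding=1),  # learns higher-level features on top
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(32, 2),                             # e.g. "this face" vs. "not this face" (hypothetical labels)
)

raw_images = torch.randn(4, 3, 64, 64)            # a batch of raw RGB images (random stand-ins here)
scores = model(raw_images)                        # forward pass straight from the pixels
print(scores.shape)                               # torch.Size([4, 2])
```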


Who will come out ahead in the race to dominate AI?

A worldwide race is underway to lead research and development in the field of artificial intelligence. Technological disputes are common in the history of science; one of the best known is the space race of the mid-20th century between the Soviet Union and the United States.


In the race for AI dominance, two competitors loom far ahead of the rest: the United States and China. It is still unclear how this dominance will actually manifest itself, but based on the past, it is possible to infer that technological superiority is directly reflected in economic, political and military power.


In the US, for example, in July of this year, the White House issued an executive order directing NIST (the National Institute of Standards and Technology), an institute similar to Inmetro in Brazil, to conduct research and development in AI with the aim of making the country a leader in the area and preventing Chinese dominance.


Leadership in AI is highly strategic. Imagine what it would be like if one country held all the raw information in the world and was the only one capable of processing that data. The gap would be equivalent to the one that exists between countries that export their raw wealth at low prices and then have to import the processed derivatives at high cost because they do not master the processing technologies.


Another strategic area is related to defense and cyberattacks. Imagine, for example, an attack that manipulates and alters sensitive information, such as financial data. The consequence could be the collapse of a country's entire economic system.


For countries that are behind in this race, like Brazil, one way to minimize the technological lag, perhaps, is to create mechanisms or laws that restrict the imposition from abroad of AI technologies with the potential to compromise, for example, the economy. A better solution would be to accelerate development in the area, but this will not be possible for many nations.


A risk to humans?

In the face of so much change, what positive or negative impacts will AI have on society? Will there be fewer jobs, with machines replacing humans? This analysis is neither simple nor definitive, since we are in the midst of the transformation. But it is possible to point out some signs of what will probably happen in the future.


First, it must be understood that AI is part of a larger context of digital transformations that are already reshaping the world's economies. Within this scenario, changes are expected in employment and work relationships. According to a study by Brazil's Ministry of Science, Technology, Innovations and Communications (MCTIC), there is a tendency to separate automatable tasks from activities where human skills are valued. The worker of the future is therefore expected to be responsible for managing the risks, strategies and operations of their activities, while the more repetitive and mechanized actions will probably be taken over by intelligent machines. The study also indicates that work relationships should become more horizontal, replacing the vertical “boss-employee” line, and that professionals will have a more autonomous role in their work and in the production of value.


Another important aspect is the use of these new technologies to improve society's quality of life. According to the MCTIC study, digital transformations, including AI, can be used to fight hunger by, for example, increasing agricultural productivity and reducing losses in the field and in distribution logistics. They can also reduce the impact of climate change through networks of intelligent sensors that, combined with AI techniques, help mitigate or prevent natural disasters.


The discussion about what our future with artificial intelligence will look like is not limited to these topics; there is much more to ponder about its positive and negative implications. But the first step is to understand the technology and disseminate that knowledge throughout society so that it is used in the fairest possible way.
