
Artificial Intelligence and Knowledge (By Breogán Gonda)

Sometimes, history seems to repeat itself: today, just like in 1984, there is great momentum around Artificial Intelligence.

“The GeneXus project is the result of our team's imagination and efforts and Artificial Intelligence.
Would it exist without Artificial Intelligence? I think not.”


Breogán Gonda, Chairman of the Board - GeneXus.

 

Artificial Intelligence

Intelligence or knowledge? Which is more important? Knowledge is the raw material on which intelligence acts.

Without knowledge, we can do little with intelligence. However, human intelligence has always helped us seek out the knowledge we need, breaking that vicious circle, as humans have done from the beginning.

It was always a world of scarce and highly valued knowledge. The transmission of knowledge has developed systematically since the Renaissance, centered on teachers. Since the mid-twentieth century, a new trend has developed that focuses teaching on “learning” rather than “teaching,” on the student rather than the teacher.

The advances of this trend were small for many years because there was no adequate technology to manage, search, and disseminate large volumes of knowledge in a simple and direct way.

Today, knowledge is no longer scarce. On the contrary, it is abundant, and technology is used so that not even geographical borders stand in the way of its transmission. Today you can study almost anything remotely.

Within this global and unstoppable process, Artificial Intelligence occupies a prominent place.

But what is Artificial Intelligence and, fundamentally, what is it for?

Sometimes, history seems to repeat itself: today, just like in 1984, there is great momentum around Artificial Intelligence. To many, it seems to be a panacea capable of solving every problem, and so venture capitalists chase companies in the field to offer them funding.

Why do I compare today's situation to that of 1984?

To make it easier to understand, I'd like to tell a short story.

In 1984, we (Nicolás Jodal and I) were facing the new challenge of corporate computing. It was clear to us that moving from “subject databases” –which had 20, 30 or 40 entities and were accessed by tens of programs– to corporate databases –with hundreds or thousands of entities and accessed by thousands of programs– implied a huge leap forward that would involve difficulties that hadn't been identified yet.

Here I must stress that we weren't speculating about future problems. Quite the contrary, these problems existed at the time and had to be solved in short time frames. Some of our clients had expressed the need to have large corporate databases that could interact with all their systems and provide them, at any time, with enterprise information to support global, tactical and strategic decision-making, in addition to addressing day-to-day operational issues.

The first task at hand was to build the enterprise data model. Given the extent of the problem and our prior knowledge about the high costs of developing, and even higher costs of maintaining large systems, the other task at hand was to find methods and tools that would make it possible to reduce these costs drastically.

After confirming that no tools that could help us solve the problem existed anywhere in the world and that solving it with traditional methods wasn't feasible, we sought to formulate it rigorously by resorting to math and logic.

Thus, we entered the field of Artificial Intelligence, which we hoped would allow us to move forward.

The future of Artificial Intelligence seemed promising. Many companies and research groups, especially in the United States, were trying to build “Expert Systems” to help human experts promptly solve very complex problems from various fields.

Thirty-three years later, most of these endeavors have failed.

At this point, it seems reasonable to ask oneself some questions: Why did so many endeavors fail, even though they were carried out by world-class technicians and scientists? And those that didn't fail, why were they successful? Isn't it possible that we're making the same mistakes again? Is the so-called Artificial Intelligence a valid solution to real-world problems?

 

The sources of knowledge

When the problem was initially approached, I think we all agreed on one thing: the sources of knowledge for our Expert Systems were human experts in the domain of the problem to be solved. These experts taught us and shared their experiences, and from all that we captured knowledge to build rules, algorithms, etc. that would make up the expert system we were trying to create.

We thought that this source of knowledge was the right one because talking to a human expert is much easier and more pleasant than talking to a computer program. Besides, promising advances were quickly made.

And so it was, at the beginning! However, after the first stage of euphoria, these advances would be much slower to come by. Why?

By examining the issue, we were able to gradually find some of its causes. For example, the human expert has “vast knowledge” but not “all the necessary knowledge.” His or her knowledge is not structured or systematized in any way. The expert “knows how” to use that knowledge or transfer it to a colleague. Transferring it to a machine, on the other hand, is quite different, because it requires an objectivity and completeness that the human expert can hardly provide.

So, to solve the issue of completeness, should we resort to a group of human experts?

The transfer of knowledge involves objectivity problems and errors. At first, while the expert system is small, we're able to identify and solve these problems; over time, when it grows significantly and, in particular, when it involves several human experts, this becomes impossible.

In sum: we need reliable, objective and complete knowledge. Otherwise, while we think we're solving our problem, we are actually introducing more subjectivity, inaccuracies, and even falsehoods. That's why we weren't making progress! This way of obtaining knowledge doesn't scale!

Obviously, I'm making this diagnosis 33 years later and with the results in front of me.

Meanwhile, our endeavor was successful. Why? Were we able to see the entire problem clearly back then? Absolutely not!

In our case, which is rather particular, there was a simplifying factor that quickly showed us that we had taken the wrong path. We played two roles: our academic background and professional experience made us “experts in the domain of the problem” and, given the conditions back then, we were also the “Artificial Intelligence experts.” We made progress by trial and error, learning by doing. The dialog between the two types of experts was very easy, and that's how we realized that capturing knowledge in that way wouldn't take us very far.

The early search for a solution to this problem led us to try to capture knowledge from data, in an automatic manner. It was a huge stroke of luck.

 

Automatic knowledge capturing

During the last 33 years, humankind has made great, productive efforts to automatically capture the data involved in problems and, in particular, the knowledge contained in them.

Are experts in the domain of the problem no longer necessary? That's not true! They play an essential, albeit different, role. They are now required to provide their conceptual knowledge, which is necessary to formulate the problem clearly, to do the “coarse tuning,” to interpret the results as we capture knowledge, and to avoid getting carried away by the results of specific cases. Even though the human expert has become a qualified consultant, he or she isn't the source of all knowledge.

In this sense, extensive work has been done regarding two options: Machine Learning and Deep Learning.

In both cases, artificial neurons are used as essential tools, along with knowledge obtained, in the most automated manner possible, from a large mass of data.

The purpose of Machine Learning is for the machine to learn from data and results that are provided to it on a massive scale. It is supervised learning because the elements from which the system should “learn” are selected beforehand.

What is expected of it? That given a set of accurate data and results, the system is capable of reaching the same results: when it does, it will have “learned.”

At first, we will be far from achieving it. So, in some way, we intervene to “help” it by making changes to the system from the outside. It is a slow process; the system “learns” and we humans “learn to help it learn.” This goes on until reaching a stage of maturity and stability, when the system returns results with a high probability of accuracy.
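
To make this loop concrete, here is a minimal sketch in Python (my own illustration, not from the original article), using the scikit-learn library and its sample iris dataset as assumed ingredients. The “help from the outside” is a human trying different settings between training runs:

    # A toy sketch of supervised Machine Learning: the system "learns" from
    # data whose correct results are known in advance, and we "help it learn"
    # by adjusting it from the outside between runs.
    from sklearn.datasets import load_iris
    from sklearn.model_selection import train_test_split
    from sklearn.tree import DecisionTreeClassifier

    X, y = load_iris(return_X_y=True)  # accurate data and known results
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    # The human intervention: trying different settings until the system
    # returns results with a high probability of accuracy.
    for depth in (1, 2, 3, 5):
        model = DecisionTreeClassifier(max_depth=depth, random_state=0)
        model.fit(X_train, y_train)             # the machine "learns"
        accuracy = model.score(X_test, y_test)  # does it reach the same results?
        print(f"max_depth={depth}: accuracy={accuracy:.2f}")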

Ideally, these internal adjustments should be made automatically by the system itself. Just like in many other fields, the tenacity of leading research groups has recently allowed them to reach this goal and create Deep Learning. It is also supervised learning, because the data used is chosen and the results are calculated, but the system behaves like a black box that evolves by itself throughout the learning process.
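
As a rough illustration of this self-adjusting “black box” (my own toy example, far smaller than any real Deep Learning system; the architecture and learning rate are assumptions of mine), here is a tiny network of artificial neurons in Python with NumPy that learns the XOR function, where every internal adjustment is made automatically by backpropagation:

    # A toy Deep Learning sketch: chosen data, calculated results, and a
    # network that adjusts its own internal weights from its own errors.
    import numpy as np

    rng = np.random.default_rng(0)
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # chosen data
    y = np.array([[0], [1], [1], [0]], dtype=float)              # known results

    W1 = rng.normal(size=(2, 8))   # hidden layer of 8 artificial neurons
    W2 = rng.normal(size=(8, 1))

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    for step in range(5000):
        h = sigmoid(X @ W1)        # forward pass through the "black box"
        out = sigmoid(h @ W2)
        err = out - y
        # Backpropagation: the system corrects itself, no manual tuning
        # of individual weights is needed.
        grad_out = err * out * (1 - out)
        grad_h = (grad_out @ W2.T) * h * (1 - h)
        W2 -= 0.5 * h.T @ grad_out
        W1 -= 0.5 * X.T @ grad_h

    print(np.round(out, 2))  # should be close to [[0], [1], [1], [0]]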

The algorithms used in Deep Learning are publicly available, and that's why many more people from various fields are now able to use these resources.

 

One example: Artificial Intelligence and GeneXus

The GeneXus project is the result of our team's imagination and efforts and Artificial Intelligence.

Would it exist without Artificial Intelligence? I think not.

Experience has shown that no one, in any organization, has enough objective and complete knowledge about the data to use it to automatically obtain the data model we need.

One can wonder: Does this knowledge exist? Where is it?

In every organization there are multiple views of the data held by its various users, and each of these users knows very well the views he or she uses every day.

The set of these views is an excellent source of knowledge!

The problem can be formulated in this way: we need a data model that satisfies all these views.

Can there be several of them? If so, which one should we choose?

Actually, the most appropriate model is this set of views. To work with it on a daily basis, we also build, through an automatic transformation, the corresponding normalized model. To aid visualization, we can also generate the corresponding Entity-Relationship model, and so on.
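
As a hypothetical miniature of such a transformation (my own drastic simplification in Python, not the actual GeneXus algorithm; all view and attribute names are invented), suppose each user view lists its attributes, a trailing "*" marks key attributes, and attributes with the same name in different views mean the same thing. Grouping each attribute under the key that determines it yields a normalized model:

    # Hypothetical sketch: derive a normalized data model from user views.
    views = {
        "InvoiceEntry": ["InvoiceId*", "InvoiceDate", "CustomerId", "CustomerName"],
        "CustomerCard": ["CustomerId*", "CustomerName", "CustomerPhone"],
        "InvoiceLines": ["InvoiceId*", "ProductId*", "Quantity", "ProductName"],
        "ProductList":  ["ProductId*", "ProductName", "ProductPrice"],
    }

    tables = {}  # normalized model: one table per distinct key
    for view, attrs in views.items():
        key = tuple(sorted(a.rstrip("*") for a in attrs if a.endswith("*")))
        data = {a for a in attrs if not a.endswith("*")}
        tables.setdefault(key, set()).update(data)

    # Remove transitively dependent attributes: anything already determined
    # by another key reachable from this table stays only in that other table
    # (e.g. CustomerName belongs with CustomerId, not with InvoiceId).
    for key, data in tables.items():
        for other_key, other_data in tables.items():
            if other_key != key and set(other_key) <= set(key) | data:
                data -= other_data

    for key, data in sorted(tables.items()):
        print(f"TABLE key={key}: {sorted(data)}")

Run on these four invented views, the sketch yields the four tables one would expect: customers, products, invoices, and invoice lines.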

With this knowledge, we can automatically generate and maintain the systems required by the organization. This knowledge is not based on any physical or technological elements, so it can be used to generate systems for any technological platform. In particular, it can be used for technologies that may arise in the future.

GeneXus is a good example of the continued efforts of a qualified research team that uses Artificial Intelligence methods and tools as essential elements.  

 

The future of Artificial Intelligence: Where is the limit?

I can affirm with confidence that, even though considerable achievements have already been made, Artificial Intelligence is only starting to be used.

For many years, we have devoted huge efforts, a great deal of energy, and keen imagination to obtaining data, checking its consistency, storing it conveniently, and keeping it consistent, only to then exploit it very modestly. With all the resources currently available, we must use it better and more imaginatively.

Every day, new applications for Artificial Intelligence appear: artificial vision, recognition of objects or people, pattern recognition, automatic translation, education, entertainment, robots, process automation, diagnostic aids of all kinds, autonomous cars …

Where is the limit? Our imagination is the limit, and since imagination is our best feature, there is no limit!




Read the original article and contact the author here