Olaf Sass · Aug 10, 2023 · 5 min read

A brief explanation of artificial intelligence (AI)

AI - One of the most exciting topics today

What is Artificial Intelligence?

Let's just ask the AI (here ChatGPT):

Artificial intelligence is a branch of computer science that deals with the development of computers and systems that can replicate or simulate human-like intelligence. AI systems use algorithms and techniques to learn from data, make decisions, solve problems, and perform tasks that normally require human intelligence.

There are several types of AI:

Weak AI (also known as narrow AI): This form of AI is limited to a specific task or problem. Examples include speech recognition, facial recognition, or recommender systems. Although such systems can perform complex tasks, they are unable to operate outside of their specialized field.

Strong AI (also known as general AI): This type of AI aims at a general intelligence comparable to human intelligence. It would be able to handle a wide range of tasks and react flexibly to new challenges. Strong AI systems would theoretically demonstrate human-like cognition, awareness, and problem-solving capabilities.

AI is used in many fields, such as self-driving cars, medical diagnosis, finance, robotics, language translation, and more. Advances in AI have the potential to revolutionize our society and economy in many ways, but they also raise ethical, social, and legal issues that need to be carefully considered.

Personally, I don't think it's that bad, but maybe there is still a little room for improvement.

How long has AI been around?

A large part of the theoretical and technological foundations was developed over the last 70 years.

Alan Turing's 1950 paper, in which he described his famous Turing test, is generally considered the first serious proposal in the field. The test is a method for determining whether a computer can be considered "intelligent": a human communicates with an unseen counterpart via text or voice input, and the test is passed if the person cannot tell whether that counterpart is a human or a machine.

The term artificial intelligence itself was coined by the American computer scientist John McCarthy at the Dartmouth Conference in 1956, which is considered the birthplace of the discipline. Its participants dealt with topics such as the automation of heuristic processes, rule-based skills, and the ability to play chess at a high level.

Can AI really think for itself?

In short, no.
But what distinguishes an AI from a simple program? A conventional program is rule-based: if case A occurs, perform step B – every step is specified by the programmer. With artificial intelligence, not every program step is spelled out in advance; instead, the algorithm independently adapts its own parameters to a specific problem. As a rule, an AI does not write its own program code (even if there are already first approaches here), but changes certain parameters within its code in order to find a general pattern in data, derive rules from it, and then apply them to new data.
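This difference can be illustrated with a toy sketch in plain Python (the discount rule and the data are hypothetical examples, not from any real system): the rule-based function has its behavior written out by hand, while the learning function adapts its two parameters to whatever data it is given.

```python
# Rule-based: every step is specified by the programmer.
def rule_based_discount(order_total):
    if order_total > 100:      # case A occurs ...
        return 0.10            # ... perform step B
    return 0.0

# "Learning": the algorithm adapts its parameters (w, b) to the data
# by gradient descent on the mean squared error, instead of having
# the relationship between x and y written out as rules.
def fit_line(xs, ys, lr=0.01, steps=2000):
    w, b = 0.0, 0.0
    n = len(xs)
    for _ in range(steps):
        grad_w = sum(2 * (w * x + b - y) * x for x, y in zip(xs, ys)) / n
        grad_b = sum(2 * (w * x + b - y) for x, y in zip(xs, ys)) / n
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

# Data generated by a pattern the algorithm does not know: y = 2x + 1
xs = [0, 1, 2, 3, 4]
ys = [1, 3, 5, 7, 9]
w, b = fit_line(xs, ys)
print(round(w, 2), round(b, 2))  # parameters converge near 2 and 1
```

The point is not the arithmetic but the division of labor: nobody told `fit_line` that the pattern is "multiply by 2, add 1" – it found those parameter values itself from the data.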

Data? What does AI need data for?

There is a saying today that data is the new gold. But why is that? This can be explained by the general way AI works.

AI systems require large amounts of data to build up their problem-solving competence through machine learning. In other words, an AI system can only be as good as the data used to train it.

Facial recognition, for example, only works precisely and robustly if the underlying AI system has been trained and tested again and again with a large number of high-quality, diverse, and wide-ranging facial images. This often requires several training runs. Afterwards, the AI system can independently make decisions about new data sets and, for example, recognize faces. To reliably process novel input data, AI systems must be retrained with new data.
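As a rough illustration of why training and retraining matter (a toy nearest-centroid classifier in pure Python with made-up feature vectors, nothing like a real face-recognition system): the model's "knowledge" is just statistics computed from the training data, so new or more varied data directly changes what it learns.

```python
# Toy nearest-centroid "recognizer": each known person is represented
# by the average of their feature vectors; a new vector is assigned
# to whichever average it is closest to.
def train(examples):
    sums, counts = {}, {}
    for features, label in examples:
        s = sums.setdefault(label, [0.0] * len(features))
        for i, v in enumerate(features):
            s[i] += v
        counts[label] = counts.get(label, 0) + 1
    return {label: [v / counts[label] for v in s]
            for label, s in sums.items()}

def predict(model, features):
    def sq_dist(centroid):
        return sum((a - b) ** 2 for a, b in zip(centroid, features))
    return min(model, key=lambda label: sq_dist(model[label]))

# Initial training data: two hypothetical "faces", two features each
data = [([1.0, 1.0], "alice"), ([1.2, 0.8], "alice"),
        ([5.0, 5.0], "bob"), ([4.8, 5.2], "bob")]
model = train(data)
print(predict(model, [1.1, 0.9]))   # → alice

# Retraining: adding new, more varied examples shifts the learned
# averages, so the model keeps up with novel input data.
data.append(([2.5, 2.0], "alice"))
model = train(data)
```

With only the first four examples, the model would misjudge inputs that look like the fifth one; only retraining on the richer data extends what it can handle – the same reason production AI systems are periodically retrained.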

Why is AI important?

Because certain problems are so complex that it is impossible to write the code for them by hand. AI will influence almost all areas of our lives and has the potential to bring great economic and social progress. It can also help us meet the pressing challenges of our time, for example in climate and species protection, medicine (e.g. automated diagnosis of diseases based on image data), and autonomous driving.

Do all companies need to invest in AI now?

According to a survey by the industry association Bitkom, the proportion of companies for which AI is not an issue has risen from 59% to 64%.

Therefore, is it better to wait? The answer is clear: yes and no.
For companies, investing in artificial intelligence is a risk, but also an opportunity. So the first questions should be: How could an AI increase my sales? How could AI reduce my costs and improve services? How can my customers benefit?

Although the large cloud companies such as IBM, Google, or Amazon also offer AI solutions, these can quickly become oversized, especially since experts are still needed to implement them successfully. And skilled workers are scarce, especially in the field of AI. Anyone who currently sees no benefit in AI should nevertheless stay on the ball: at some point competitors will rely on it, and by then at the latest it will be too late to catch up. Moreover, at the rate at which AI is currently evolving, that time will come sooner rather than later.

At the same time, the costs and resources required for the use of AI are falling rapidly. For some years now, there have been so-called frameworks that provide the basic tools to quickly set up your own AI networks – TensorFlow and PyTorch are the most widespread.
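To give a feel for what those frameworks take off your hands, here is the kind of bookkeeping a tiny neural network needs when written by hand in plain Python (the layer sizes and weights are arbitrary illustrative values; PyTorch or TensorFlow would additionally handle gradients and training automatically):

```python
# One "linear" layer: multiply inputs by a weight matrix, add biases.
def linear(x, weights, biases):
    return [sum(w * v for w, v in zip(row, x)) + b
            for row, b in zip(weights, biases)]

# ReLU activation: pass positives through, clamp negatives to zero.
def relu(x):
    return [max(0.0, v) for v in x]

# Forward pass through a hypothetical 2-3-1 network
hidden = relu(linear([1.0, 2.0],
                     weights=[[0.5, -0.2], [0.1, 0.3], [-0.4, 0.6]],
                     biases=[0.0, 0.1, -0.1]))
output = linear(hidden, weights=[[1.0, -1.0, 0.5]], biases=[0.2])
print(output)
```

Frameworks express the same structure in a few declarative lines, compute the gradients for every parameter automatically, and run the arithmetic on GPUs – which is why they have lowered the entry barrier so drastically.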

What are the advantages and disadvantages of artificial intelligence?

Companies that use artificial intelligence primarily benefit from very efficient workflows. Faster production processes that are as error-free as possible, in turn, mean that a lot of money can be saved. Private individuals, above all, appreciate that machines and robots take on unpleasant or monotonous tasks: the quality of life increases and there is time for other things.

What sounds so nice, however, also has disadvantages. Wherever software is used, there is always the risk that the technology will fail. In addition to software and hardware errors, cyber attacks pose a major risk. In the case of intelligent kitchen gadgets, this may not be so dramatic – but how dangerous will it be if we only use self-driving cars in the future?

Another disadvantage has already been much discussed: sooner or later, computers will be able to replace more and more people. Accordingly, jobs will disappear – especially those that require physical effort and involve rather simple tasks. According to forecasts, however, other jobs will also be created. After all, wherever machines are used, people are needed to monitor them.
