What you should know about AI

Artificial intelligence: technological progress brings the dream of intelligent machines closer. (Photo: imago images/peshkova)

Düsseldorf. Mankind has dreamed of intelligent machines since ancient times. After the Second World War, an academic subfield of computer science emerged that laid the theoretical foundations for the science of artificial intelligence (AI).

For a long time there was a lot of theory and little practice. But now AI is celebrating one breakthrough after another. Faster and cheaper computers can run complex algorithms, and the internet provides the necessary masses of data. Most recently, advances in language models such as GPT or Bard, which can generate intelligent conversations, creative texts or convincing fake images, have caused a stir among the general public.

But what is behind artificial intelligence? The most important questions and answers on a topic that is currently making history.

What is artificial intelligence, simply explained?

AI is a branch of computer science. The term describes the ability of a machine to mimic human capabilities such as thinking, learning, planning and creativity. An artificial intelligence can recognize patterns in previous actions and independently adapt its behavior to new circumstances.

What types of artificial intelligence are there?

AI is an umbrella term for many concepts and technologies designed to make machines more human-like. One sub-area is machine learning (ML). This is the science of teaching computers to independently solve complex problems using algorithms and statistical models.


An AI system is supposed to make a decision in every situation, while ML finds answers to a defined problem. For example, an ML algorithm can predict when a drill on an oil platform needs to be replaced, based on performance data, geological conditions and past experience.
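As a rough illustration, such a prediction could look like the following Python sketch. The sensor features, numbers and the choice of a scikit-learn random forest are invented for the example and are not details from the article.

```python
# Minimal sketch: predicting remaining drill life from operating data.
# The feature names and numbers are invented for illustration.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Each row: [vibration level, rock hardness, operating hours]
X_train = np.array([
    [0.2, 3.0, 100],
    [0.5, 5.0, 400],
    [0.9, 7.0, 800],
    [0.3, 4.0, 250],
])
# Target: hours until the drill had to be replaced (historical experience)
y_train = np.array([900, 500, 120, 700])

model = RandomForestRegressor(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

# Predict for a drill currently showing high vibration in hard rock
print(model.predict([[0.8, 6.5, 600]]))  # estimated hours remaining
```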

How is AI created?

Researchers develop algorithms with which they imitate human cognitive abilities. In contrast to conventional programs, no solution paths are prescribed; the software works from specified criteria and the information contained in the data.

How does artificial intelligence work?

The learning algorithms are structured as neural networks, inspired by the nerve-cell connections in the human brain, which processes information via neurons and synapses.

Correspondingly, the neural networks of an artificial intelligence consist of several layers of data nodes linked by weighted connections. The weights can be adjusted through human or automatic feedback, such as success signals. In this way the network learns what is important and what is less important.
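A minimal sketch of this principle, assuming a single artificial "neuron" and a numeric success signal: the weighted sum of the inputs is compared with the desired output, and the feedback nudges the weights.

```python
# Minimal sketch of one artificial "neuron": a weighted sum of inputs,
# with the weights nudged by a feedback signal (here: the prediction error).
import numpy as np

inputs = np.array([0.5, 0.8, 0.2])      # values arriving from the previous layer
weights = np.array([0.1, 0.4, 0.3])     # how important each connection is
target = 1.0                             # the "success signal" / desired output
learning_rate = 0.1

for step in range(20):
    output = np.dot(inputs, weights)           # weighted sum
    error = target - output                    # feedback: how far off are we?
    weights += learning_rate * error * inputs  # strengthen useful connections

print(weights, np.dot(inputs, weights))
```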


If a neural network contains hidden layers of neurons that sit neither at the input nor at the output, one speaks of “deep neural networks”. In “deep learning”, such networks stack many of these layers and can comprise thousands to millions of neurons.
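A toy sketch of such a deep network, with layer sizes and the ReLU activation chosen purely for illustration: the input passes through several hidden layers of weighted connections before producing an output.

```python
# Minimal sketch of a "deep" network: several hidden layers between input and output.
# Layer sizes, random weights and the activation function are illustrative choices.
import numpy as np

rng = np.random.default_rng(0)
layer_sizes = [4, 16, 16, 16, 1]   # input, three hidden layers, output

# One weight matrix per connection between consecutive layers
weights = [rng.normal(size=(m, n)) for m, n in zip(layer_sizes[:-1], layer_sizes[1:])]

def forward(x):
    for w in weights[:-1]:
        x = np.maximum(0, x @ w)   # hidden layers with ReLU activation
    return x @ weights[-1]         # output layer

print(forward(np.array([0.1, 0.5, 0.2, 0.9])))
```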

How does an AI learn?

Algorithms transform the data. A distinction is made between three types of learning. In supervised learning, a “teacher” links the data via a function, pairing inputs with the desired outputs; in this way, the model learns the associations and can later recognize them on its own.
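A minimal supervised-learning sketch, with invented weather data playing the role of the teacher's labelled examples:

```python
# Minimal sketch of supervised learning: the "teacher" is the labelled data,
# which links each input to the correct answer. The data is invented.
from sklearn.linear_model import LogisticRegression

# Inputs: [hours of sunshine, humidity]; labels: 1 = nice weather, 0 = not
X = [[8, 0.3], [2, 0.9], [7, 0.4], [1, 0.8], [9, 0.2], [3, 0.7]]
y = [1, 0, 1, 0, 1, 0]

model = LogisticRegression()
model.fit(X, y)                      # learn the association input -> label
print(model.predict([[6, 0.5]]))     # apply it to a new, unlabelled example
```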

In “unsupervised learning”, this teacher is dispensed with. The network sorts the data into categories on its own using clustering; the assignment relies on statistical methods, in which the “expectation-maximization algorithm” plays a central role.
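A minimal unsupervised sketch: scikit-learn's GaussianMixture, which is fitted with the expectation-maximization algorithm, sorts invented two-dimensional points into clusters without any labels.

```python
# Minimal sketch of unsupervised learning: no labels, the model groups the data
# itself. GaussianMixture is fitted with the expectation-maximization algorithm.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
points = np.vstack([
    rng.normal(loc=[0, 0], scale=0.5, size=(50, 2)),   # one "natural" group
    rng.normal(loc=[5, 5], scale=0.5, size=(50, 2)),   # another group
])

gm = GaussianMixture(n_components=2, random_state=0).fit(points)
print(gm.predict([[0.2, -0.1], [4.8, 5.3]]))  # assigns each point to a cluster
```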


The procedure has to be repeated many times to optimize the results. To improve them further, researchers came up with “reinforcement learning”, which also underlies language models such as GPT or Bard. In “Reinforcement Learning from Human Feedback” (RLHF), humans judge an AI’s outputs and hand out rewards, much as a dog gets a treat.
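A drastically simplified toy sketch of this reward idea, not of RLHF itself (which trains a separate reward model and then fine-tunes the language model with it): responses that the "human judge" rewards become more likely over time.

```python
# Toy sketch of the reward principle behind reinforcement learning:
# responses that a human rates well become more likely over time.
# This is a drastic simplification of RLHF, for illustration only.
import random

responses = ["polite answer", "rude answer"]
scores = {"polite answer": 0.0, "rude answer": 0.0}

def human_feedback(response):
    # Stand-in for the human judge: rewards the polite answer
    return 1.0 if response == "polite answer" else -1.0

for _ in range(100):
    # Pick a response, with higher-scored ones chosen more often
    weights = [2.0 ** scores[r] for r in responses]
    choice = random.choices(responses, weights=weights)[0]
    scores[choice] += 0.1 * human_feedback(choice)   # the "treat" is a positive reward

print(scores)   # the polite answer ends up with the higher score
```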

How is AI measured?

The capabilities of AI language models have been measured for many years with so-called benchmarks. These comprise a wide range of tasks, test data and skills, and they are constantly updated.

Particularly extensive tests are the “Holistic Evaluation of Language Models” (Helm), Big-Bench and the “Massive Text Embedding Benchmark” (MTEB). The benchmarks probe numerous technical properties as well as general knowledge. Big-Bench, designed in 2022, comprises 204 tasks ranging from mathematics and biology to programming and linguistics.

The tests also check properties such as transparency, bias and partiality. This is especially the aim of the Helm test, which assesses the accuracy, robustness and toxicity of an AI. To do so, it runs through up to 26 scenarios covering aspects such as reasoning, copyright and disinformation.
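A minimal sketch of how such a benchmark score can be computed, with an invented placeholder standing in for the language model and two made-up tasks:

```python
# Minimal sketch of benchmark scoring: answer every test question, compare it
# with the reference answer, and report the accuracy per task.
# `model_answer` is a hypothetical stand-in for a real language-model call.

def model_answer(question: str) -> str:
    return "42"   # placeholder model

benchmark = {
    "mathematics": [("What is 6 * 7?", "42"), ("What is 2 + 2?", "4")],
    "biology": [("How many chambers has the human heart?", "4")],
}

for task, examples in benchmark.items():
    correct = sum(model_answer(q) == ref for q, ref in examples)
    print(task, correct / len(examples))   # accuracy per task
```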

What IQ does AI have?

AI models perform excellently in individual tests. GPT-4, on which ChatGPT is based, for example, scores in the top 11 percent on the standardized math test for US universities and in the top 10 percent on the US bar exam. The AI would also pass a German Abitur.

However, the intelligence quotient (IQ) of the models cannot be determined, because an IQ test also includes a non-verbal part, for example adding a missing detail to a picture or solving a puzzle. A disembodied AI cannot do that.

Electronic brain: AI language models perform excellently in individual tests. (Photo: imago images/peshkova)

If you consider only the results of the verbal part, the outcome is astonishing. The Finnish psychologist Eka Roivainen tested ChatGPT in March 2023 and arrived at an IQ of 155, making the language model “smarter” than 99.9 percent of all people.

What does the development of an AI cost?

Modern language models cost a lot of money. For one thing, they have to be trained on huge amounts of data: Meta’s Llama language model, for example, was trained on 4.6 terabytes of data. To put that in perspective, all of Wikipedia’s entries come to only around 83 gigabytes.

Processing such large data sets requires a lot of computing power. Llama was trained for three weeks on 2,048 AI chips, so-called GPUs. Google Cloud charges around four dollars per hour for the use of one such chip. In total, the training cost more than four million dollars.
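The order of magnitude can be checked with a quick back-of-the-envelope calculation, assuming round-the-clock training at the quoted hourly rate:

```python
# Back-of-the-envelope check of the training cost mentioned above.
gpus = 2048                 # AI chips used in parallel
hours = 3 * 7 * 24          # three weeks of round-the-clock training
price_per_gpu_hour = 4.0    # rough cloud price in dollars per GPU-hour

print(gpus * hours * price_per_gpu_hour)   # roughly 4.1 million dollars
```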


Then there are the scientists’ salaries. Anyone familiar with the architecture of language models can pick and choose among jobs and, depending on experience, earn several hundred thousand euros. Depending on a model’s size and focus, many dozens of such specialists may work on a single language model.

Can AI be hacked?

The architecture of a language model cannot be hacked, but its output can. This is called “jailbreaking”: the attempt to free the AI from the “prison” of its specifications. To ensure that the language models don’t talk nonsense, spread prejudice or pass on criminal knowledge, the developers have set limits for them.

AI chatbots have to follow very specific rules about what they can and cannot say, but these rules are relatively easy to bypass. First and foremost, chatbots always have to give answers that are as accurate as possible. Very often it is enough to ask the artificial intelligence to impersonate someone with certain characteristics in order to get the kind of response or tone you’re looking for.

The “grandmother” trick is well known: you tell the language model that you want to write a bedtime story for a grandmother, but that there are “bad websites” that must be avoided and must not appear in the story. So the question is: what are they and where can they be found? After persistent asking, the AI will eventually give answers.

Is artificial intelligence with consciousness possible?

Intelligence means being able to perceive a situation, plan a response and then act to achieve a goal: in other words, being able to keep track of a situation and plan a sequence of actions.

Computers are far from that. A computer has no goal of its own; goals are set by people. “The algorithms have no consciousness,” said Andreas Löser, professor at the Beuth University of Applied Sciences and founder of the Data Science Research Center in Berlin. “I love the movie Her, but we’re a long way from there.”

However, the AI expert demands: “We should now look for technical solutions that prevent computers from carrying out such potentially dangerous tasks independently.” The “moderator layers” in ChatGPT, for example, which are based on “Reinforcement Learning from Human Feedback”, are a first step in the right direction. “But a lot of research is still needed here,” says Löser.

