US tech industry struggles to find guidelines for AI

New York. In March, Tesla boss Elon Musk called in an open letter for a six-month development freeze for new applications based on artificial intelligence (AI), warning that the technology is a “danger to humanity.” It has now emerged that he is apparently having an AI language model developed himself.

As the “Financial Times” reported on Friday, citing insiders, Musk is working on an alternative to the ChatGPT system used by Microsoft. To that end, the billionaire has founded the company X.AI, assembled a development team and bought graphics cards on a large scale. He is also in talks with investors in his companies Tesla and SpaceX about taking a stake.

The about-face shows how much hype ChatGPT has unleashed. Google and the Facebook parent Meta, banks and start-ups are also gripped by AI fever. On Thursday, Amazon introduced a new service called Bedrock, intended to let its cloud customers create AI-generated text. Bedrock is “already big and great,” enthused Amazon boss Andy Jassy.

Critics have doubts about the “awesomeness” of the new systems. Many problems remain unsolved: the AI systems “hallucinate”, are riddled with false information and all too often do not disclose the data they are built on. The latest plans by the ChatGPT makers, such as scanning everyone’s iris, only add to the unease.

The US tech industry is debating more intensively than it has for years: What does ethical, responsible use of AI look like? Where are the technology’s limits, and what oversight do AI applications need? Four guidelines are emerging.

Guideline 1: Clarify boundaries

An important discussion forum is the “Partnership on AI” (PAI), a non-profit organization. Its roughly 100 partners include Apple, Google and Meta, the University of California, Berkeley, and the American Psychological Association. Its goal is the development of safe, fair and transparent AI applications.

The PAI met in New York last week. Francesca Rossi, global head of AI ethics at IBM, reported that the members agree the technology offers “extremely great opportunities”: nobody needs to fear a “killer robot” straight out of a science fiction film.

“Language models write down the most likely next word after the first 300 words. They don’t think; they can’t lie or tell the truth,” says Rossi. We are still a long way, she adds, from a “self-aware” AI that could turn against its human creators.
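In highly simplified form, Rossi’s description can be illustrated with a few lines of Python. The toy corpus and the word-pair counting below are illustrative assumptions; a system like ChatGPT uses a neural network trained on vast amounts of text, but the principle of extending a text with a statistically likely next word is the same.

```python
from collections import Counter

# Toy corpus standing in for the billions of words a real model is trained on.
corpus = "the cat sat on the mat the cat slept on the sofa".split()

# Count which word follows which (a bigram model as a stand-in for a large
# neural language model).
bigrams = Counter(zip(corpus, corpus[1:]))

def most_likely_next(word: str) -> str:
    """Return the statistically most likely continuation of `word`."""
    candidates = {nxt: n for (prev, nxt), n in bigrams.items() if prev == word}
    return max(candidates, key=candidates.get) if candidates else "<end>"

# The "model" simply extends the text with the most probable next word --
# there is no understanding, intention or truth involved.
word = "the"
for _ in range(4):
    nxt = most_likely_next(word)
    print(word, "->", nxt)
    word = nxt
```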

Nevertheless, there are risks if you are not aware of the technology’s limits. “AI is neither good nor bad. But it’s not neutral either,” says Rossi. Large language models like ChatGPT, she says, take the development to a new level.

Elon Musk

The Tesla boss called for a six-month AI development freeze – and is now launching his own start-up.

(Photo: Reuters)

Until now, AI has only interpreted existing material. “For the first time, it is now generating content, that is, creating something completely new on the basis of input, be it text, images or videos,” says Rossi. “AI can generate inappropriate content or even hallucinations.”

That could “lead to wrong conclusions”. Instead of gazing into the distant future, researchers and companies should analyze the models’ current limitations. “The biggest danger is overestimating the capabilities of the AI,” says Rossi.

Guideline 2: Create transparency

An example of the technology’s limits comes from Amazon. The group had developed an AI recruiting program that was supposed to identify top talent among applicants on the basis of their CVs. Since most Amazon programmers are male, the AI concluded that women perform poorly and rejected their applications.

Even after the developers taught the AI not to penalize such applicants directly, it found other ways to discriminate against non-male or non-white applicants. Amazon scrapped the project in 2018.

In the tech world, the phenomenon is known as the “alignment” problem. “AI systems are trained to maximize an outcome toward a specific goal,” explains PAI chief executive Rebecca Finlay. “However, if your model is under-specified, you will get useless results.”
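What an under-specified objective can do is easy to simulate. The following toy script is a hypothetical illustration loosely inspired by the Amazon case, not that system itself: the invented scoring function rewards candidates for resembling past hires on every attribute, including one that says nothing about ability.

```python
# Hypothetical illustration of an under-specified objective (not Amazon's
# actual system): "prefer candidates who resemble past hires" quietly encodes
# the past hires' demographics instead of their ability.
past_hires = [
    {"years_experience": 7, "mentions_womens_club": 0},
    {"years_experience": 5, "mentions_womens_club": 0},
    {"years_experience": 9, "mentions_womens_club": 0},
]

def similarity_score(candidate: dict) -> float:
    """Proxy objective: penalize any deviation from past hires on any feature."""
    score = 0.0
    for hire in past_hires:
        for feature, value in hire.items():
            # Every feature counts, including ones irrelevant to job performance.
            score -= abs(candidate[feature] - value)
    return score

a = {"years_experience": 8, "mentions_womens_club": 1}   # equally qualified
b = {"years_experience": 8, "mentions_womens_club": 0}
print(similarity_score(a), similarity_score(b))  # b scores higher -- the bias
```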

Francesca Rossi

“AI is neither good nor bad. But it’s not neutral either,” says the AI expert from IBM.

(Photo: Leverhulme Center for the Future of Intelligence)

The problem: it is often impossible to see how AI systems arrive at their decisions. The conclusion: “We have to disclose the processes.”

The PAI members have spoken out in favor of always publishing AI models with so-called model cards, says Finlay. “These data sheets transparently document the components that make up the system throughout the entire development process.”
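What such a data sheet could look like in machine-readable form is sketched below. The structure and field names are hypothetical and do not follow PAI’s or any published model-card template.

```python
from dataclasses import dataclass, field

# Hypothetical, minimal model-card structure; the field names are illustrative
# and do not correspond to any particular published schema.
@dataclass
class ModelCard:
    model_name: str
    intended_use: str
    training_data: list[str]          # data sources used to build the system
    known_limitations: list[str] = field(default_factory=list)
    evaluation_results: dict[str, float] = field(default_factory=dict)

card = ModelCard(
    model_name="example-llm-v1",
    intended_use="Drafting product descriptions under human review",
    training_data=["licensed news archive", "internal product manuals"],
    known_limitations=["may hallucinate facts", "English-language text only"],
    evaluation_results={"factual_error_rate": 0.04},
)
print(card)
```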

Guideline 3: Check the underlying data

Closely linked to the question of which components make up the algorithm is the data on which it bases its conclusions. The more data the AI is fed, the faster it can evolve. It is not for nothing that Amazon boss Jassy emphasizes the advantage of having a trove of data of one’s own.

But large amounts of data alone do not produce useful AI. Microsoft had to take its chatbot Tay offline in 2016 after just 24 hours. Meta’s Galactica program, which draws on 48 million sources, mainly from the internet, lasted just three days in November before racist or meaningless output led to its shutdown.


“One way to minimize hallucinations is to control the underlying data,” says IBM computer scientist Rossi. “The AI model then does not draw freely on the internet, but on a verified pool of information. This is useful in corporate networks.”
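The principle Rossi describes can be sketched in a few lines. The documents, the question and the simple keyword matching below are assumptions for illustration only; a production system would use far more sophisticated retrieval and a large language model on top, but it would likewise answer only from the vetted pool.

```python
# Minimal sketch of the "verified pool" idea: answers are grounded only in a
# curated document set instead of the open internet. Keyword-overlap retrieval
# is a deliberate simplification.
verified_docs = [
    "Employees receive 30 days of paid leave per calendar year.",
    "Expense reports must be filed within 14 days of the end of a trip.",
]

def retrieve(question: str, docs: list[str]) -> str:
    """Pick the verified document sharing the most words with the question."""
    q_words = set(question.lower().split())
    return max(docs, key=lambda d: len(q_words & set(d.lower().split())))

question = "How many days of paid leave do employees receive?"
context = retrieve(question, verified_docs)
print("Answer only from this verified context:", context)
```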

IBM relies on this approach. It is also recommended by Esri from California, the world’s leading provider of geoinformation software. “We have used AI for a long time, both on satellite imagery and on text,” says founder Jack Dangermond. The company is currently examining how to integrate large language models such as ChatGPT into its own solutions. “But we only apply AI to our own data to avoid risks and plagiarism.”

Guideline 4: Keep people at the wheel

The examples show that the human factor will remain important. The technology itself cannot be regulated; it is developing too quickly for that, says Christina Montgomery, who advises the US Department of Commerce on AI issues. “But we should regulate its use.”

The US government is already examining possible guidelines. Some companies have banned the use of ChatGPT altogether. Others are experimenting with having a second AI check the results of the first system, under human supervision.

According to Montgomery, the most important principle, one the PAI also upholds, is: “A user must always know when they are talking to an AI. And a human must always be involved in the decision-making process.”

For the tech industry, this recommendation amounts to a wake-up call: in their most recent rounds of layoffs, PAI members Microsoft, Meta, Google and Amazon, of all companies, cut large parts of their AI ethics teams. And Musk is not leading by example either. At his short-message service Twitter, he made short work of things and fired the entire AI ethics department.
