Miriam Meckel: Ambient Computing: The Alternative to the Metaverse

In 1989, former Soviet President Mikhail Gorbachev reportedly said, “He who comes too late will be punished by life.” It is a sentence that could give Vladimir Putin food for thought at the moment. But the opposite can also be true: those who come too early are punished by life as well, for example through outright rejection by customers and target groups.

This is what happened to Google almost ten years ago, when the technology group tried to launch “Google Glass”, the first internet-connected pair of glasses. The product failed before the market launch could really get underway. Privacy advocates resisted the glasses, which allowed the wearer to film their surroundings without anyone noticing. Whoever wore the Google glasses was derided as a “glasshole”.

At the very end of its annual I/O developer conference, Google presented a new pair of glasses. The model has no name yet, apparently no camera, and looks like ordinary glasses. But it can translate conversations into other languages in real time. In a demo video, an elderly woman talks to her daughter, one speaking Mandarin and the other English. And they understand each other.

That may come as a surprise, but there is an exciting strategy behind it. The revival of glasses is an indicator of how Google sees the future of human-machine communication – as a permanent exchange that makes life easier.

In technical terms, this is called “ambient computing”. With the help of artificial intelligence, smart agents in glasses, lamps, loudspeakers, even entire lighting, heating and energy supply systems communicate seamlessly and unobtrusively with the people around them. And we interact just as seamlessly with the machines that enrich our lives with information, translation, light and music.

In this idea lies the future vision of Google’s most important business, internet search. First, the input field into which we type our search terms will change. It will eventually be replaced by an artificially intelligent virtual companion, available around the clock to respond to our needs and desires without our having to speak or type them at all. Glances, movements, behavior patterns: they all feed into a calculation for a perfectly personalized service that offers each individual the support he or she needs at any given moment.

Second, this “ambient search” integrated into everyday life will change the consumer world. An example: when looking for a dress for a wedding party, I can use text, speech or images to describe the feeling the dress should give me and the impression I want to make at the party.

Google then makes me an offer that I can change and adjust until the dress feels right. Size and cut do not matter at first, because the dress is then produced on the basis of my preferred styles and my measurements.

So this is about much more than internet search. It is about the personalized design and manufacture of new products that an enriched, intelligent search devises for us – from haute couture to haute creature.

If this all feels like a scene from the 2013 Hollywood film “Her”, that is because it is. Joaquin Phoenix is no longer the only one who talks to his operating system (which is downright adorable thanks to Scarlett Johansson’s voice). We can do that too.

What seemed like pure science fiction in 2013 is now realistically within reach. This is made possible by the rapid development of natural language processing, the branch of machine learning within artificial intelligence that analyzes and generates human language. Google runs these models directly on its devices via its Tensor chip, whether smartphones, Google Home speakers or the latest glasses. The better these AI language models get, the easier and more intuitive it will be for humans to interact with them. We will then speak to the virtual agents as if we were talking to a good friend – exactly as “Her” outlined it.
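Google’s production stack is proprietary, but the basic mechanism of running a compact translation model locally can be sketched with open-source stand-ins. A minimal illustration, assuming the Hugging Face transformers library and the small t5-small model (neither of which Google’s glasses actually use):

```python
# Minimal sketch of local machine translation with a small open model.
# t5-small and the transformers library are stand-ins here; Google's
# on-device models and Tensor-chip runtime are not publicly available.
from transformers import pipeline

# A compact model like t5-small can run on modest local hardware,
# loosely analogous to on-device inference on a phone or in glasses.
translator = pipeline("translation_en_to_de", model="t5-small")

result = translator("Where is the train station?")
print(result[0]["translation_text"])  # e.g. "Wo ist der Bahnhof?"
```

Running the model locally rather than in the cloud is part of what keeps latency low enough for a live conversation.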

Ambient Computing: An Alternative Model to the Metaverse

Google’s research arm DeepMind has just presented, in a scientific paper, a “generalist agent” that applies the sequence-modeling techniques behind large language models to work what seem like small miracles. The agent, called “Gato”, can play computer games, describe pictures, chat, and control a robotic arm to stack building blocks. This is genuinely revolutionary: human-machine communication is becoming flexible. Any form of question (text, image, video, gesture) can be translated into any form of answer.
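The DeepMind paper describes Gato as a single transformer that serializes every modality into one shared token stream. The following sketch is purely illustrative, with invented helper names, bin sizes and vocabulary rather than DeepMind’s code, but it shows the core idea: once text, pixels and robot actions are all discrete tokens, one sequence model can consume and produce them all.

```python
# Purely illustrative sketch: serialize several modalities into one
# shared token stream, the core idea behind a "generalist agent".
# Helper names, bin sizes and vocabulary size are invented here,
# not taken from DeepMind's Gato implementation.
from dataclasses import dataclass

TEXT, IMAGE, ACTION = range(3)  # modality tags

@dataclass
class Token:
    modality: int
    value: int  # index into one shared discrete vocabulary

def tokenize_text(s: str) -> list[Token]:
    # Stand-in for a subword tokenizer (real systems use SentencePiece etc.).
    return [Token(TEXT, ord(c) % 32000) for c in s]

def tokenize_image(pixels: list[int]) -> list[Token]:
    # Stand-in for patch embedding: quantize 0-255 pixel values into bins.
    return [Token(IMAGE, p // 4) for p in pixels]

def tokenize_action(joint_angles: list[float]) -> list[Token]:
    # Continuous robot actions in [-1, 1] are discretized into 1024 bins.
    return [Token(ACTION, int((a + 1.0) * 511.5)) for a in joint_angles]

# One flat sequence mixing modalities: the transformer itself would see
# nothing but tokens, whatever the data originally was.
episode = (
    tokenize_text("stack the red block")
    + tokenize_image([12, 255, 80, 3])
    + tokenize_action([0.1, -0.4, 0.7])
)
print(len(episode), "tokens from three modalities in one stream")
```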

If such prospects make your hair stand on end, a comparison with the current hype topic might help: the Metaverse. Ambient computing could become the gateway to an AR world, a world enriched by augmented reality, and thus an alternative model to the Metaverse.

What would we rather do in the future: enrich our real world with artificial intelligence in order to navigate it better and realize our wishes? Or spend a growing part of our lives in a virtual second world while our real environment withers away? One word was missing at this year’s Google I/O conference: the Metaverse.

