This week - the state of the art in anti-ageing research; OpenAI's newest AI generates images from text; a look back at the 2004 DARPA Grand Challenge; and more!
This makes me feel like we actually live in the 21st century. During the recovery mission of the Chang’e 5 sample container in the rough terrain of Inner Mongolia, the crew tasked with setting up the communications centre, electrical supply systems and other essential services in the area wore exoskeletons. Developed by the Chinese company ULS Robotics, the powered exoskeletons allowed the crew to carry 50 kg loads at a time for a hundred meters across the rough, snowy terrain.
Here is a very good overview of where research on anti-ageing therapies stands right now. It starts by building the basics - what exactly ageing is and why we should put resources into slowing it down (the simple graph showing how many years of life slowing ageing could add, compared with curing cancer or heart disease, speaks for itself) - and then gives a brief explanation of the most promising strategies we have today.
DALL·E, the newest AI from OpenAI, is a 12-billion parameter version of GPT-3 trained to generate images from text descriptions, using a dataset of text–image pairs. What it can do is quite impressive - just describe in plain English what you need and DALL·E will generate it for you.
Improving learning speed, improving security, learning common sense, and finding something better than deep learning - if AI researchers find answers to these four problems, then, according to Singularity Hub, we will have another year of massive progress in making machines more intelligent.
Researchers have proposed a new technique for robots to better understand what is going on around them without getting lost in heavy computations. It involves the robot summarizing only the broad strokes of other agents’ motions rather than capturing them in precise detail. This allows it to nimbly predict their future actions and its own responses. The idea has been tested in simulations, including a self-driving car, and in the real world with a game of robot air hockey. In each of the trials, the new technique outperformed previous methods for teaching robots to adapt to surrounding agents. The robot also effectively learned to influence those around it.
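The core idea - compress another agent's detailed trajectory into a few coarse numbers and extrapolate those instead of the full path - can be illustrated with a minimal sketch. This is not the researchers' actual method, just an assumed toy version where the "summary" is the last position plus average velocity:

```python
def summarize_trajectory(points):
    """Reduce a detailed (x, y) trajectory to a coarse summary:
    the last observed position plus the average velocity per step."""
    (x0, y0), (xn, yn) = points[0], points[-1]
    steps = len(points) - 1
    return (xn, yn), ((xn - x0) / steps, (yn - y0) / steps)

def predict(summary, horizon):
    """Extrapolate the coarse summary instead of the full path."""
    (x, y), (vx, vy) = summary
    return (x + vx * horizon, y + vy * horizon)

# A noisy, detailed trajectory of another agent (hypothetical data)
traj = [(0.0, 0.0), (1.1, 0.2), (1.9, -0.1), (3.0, 0.0), (4.0, 0.1)]
summary = summarize_trajectory(traj)
print(predict(summary, 3))  # → (7.0, 0.175)
```

The payoff is that prediction now costs a couple of multiplications per agent rather than reasoning over every recorded point - the trade-off between precision and tractability the article describes.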
To fight modern slavery, cybersecurity experts are turning to AI-based systems to spot suspicious activities and to connect the dots between victims, suspects' phone numbers, bank accounts, transactions, flight records and any other evidence collected during an investigation.
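"Connecting the dots" between pieces of evidence is, at its simplest, a graph-clustering problem: entities that share a record end up in the same cluster. The article doesn't describe the actual systems' internals, so here is just an assumed minimal sketch using union-find with hypothetical entity names:

```python
from collections import defaultdict

def connected_cases(records):
    """Group evidence records (pairs of entities seen together)
    into linked clusters via union-find."""
    parent = {}

    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path compression
            x = parent[x]
        return x

    for a, b in records:
        parent[find(a)] = find(b)  # union the two clusters

    clusters = defaultdict(set)
    for x in parent:
        clusters[find(x)].add(x)
    return list(clusters.values())

# Hypothetical evidence: pairs of entities linked by some record
evidence = [
    ("victim_1", "phone_555"),
    ("phone_555", "suspect_A"),
    ("suspect_A", "account_77"),
    ("victim_2", "phone_901"),
]
for cluster in connected_cases(evidence):
    print(sorted(cluster))
```

Real investigative tools add scoring, entity resolution and much richer record types, but the underlying "one shared phone number merges two case files" logic is the same.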
This article takes us back to March 13th, 2004. On that day, in the middle of the Mojave Desert, 15 teams brought their self-driving cars to compete in the DARPA Grand Challenge. None of the contenders finished the race, but on that day the self-driving revolution we are now on the brink of was born.
This guy and his project - K3lso - prove you don't need the vast resources of MIT or Boston Dynamics to build an impressive quadruped robot. I can't wait to see it up and running.
Here is another take on robot abuse. Instead of being told how to recover from falling down, Jueying - a quadruped robot from Zhejiang University - learns on its own how to get up after falling over or being hit with a big stick. The robot learns general rules for how to recover and adapts them to real-life scenarios, making it more resilient with little input from humans.