Exactly five years ago, the first issue of H+ Weekly was sent. The next week the second issue went out, then another one, and another one, and so on. 262 weeks later, here we are. It has been an incredible journey and I want to say thank you for being a part of it. I shared what this journey looked like in a post over here.
But here I will share what I want to create next. I want to create a community. I want H+ Weekly to belong to the community, not just to me. I want H+ Weekly to be bigger than me. I hope we will figure out together what exactly that will look like.
Let's now dive into the next issue of H+ Weekly!
This week - OpenAI releases its first commercial product; DeepMind teaches AI to cooperate; what good can come out of the pandemic for bioscience; and more!
More Than A Human
A supernumerary robotic device is one that adds functionality to an existing system. In this case, a team in Canada added a third arm, with an associated three-fingered hand, to a human subject. The arm is strapped to the waist and hips of the user and is remotely controlled. Future plans include adding AI and replacing the bulky power source with portable batteries.
Is there a limit to human intelligence? Or will our brainpower be eclipsed by intelligent machines in the future? BBC asks leading intelligence experts to share their views.
Robert Miles presents Stuart Russell's list of 10 reasons why people ignore AI safety, from "we will never get to AGI" to "just have humans involved in the process and it will be fine" to "we can just turn it off".
OpenAI launched its first commercial product - a cloud service based on its text-generating algorithms. The idea is to offer a general-purpose "text in, text out" interface: users type commands or queries in plain language, and the AI does the rest.
DeepMind really likes to make AI play games. This time, they made AI play Diplomacy to learn to cooperate. “We propose using games like Diplomacy to study the emergence and detection of manipulative behaviours… to make sure that we know how to mitigate such behaviours in real-world applications,” the researchers wrote in the paper. “Research on Diplomacy could pave the way towards creating artificial agents that can successfully cooperate with others, including handling difficult questions that arise around establishing and maintaining trust and alliances.”
The US Air Force is hoping to pit an autonomous drone equipped with an artificial intelligence-driven flight control system against a fighter jet with a human pilot in a little over a year. The service has described this effort in the past as a "big moonshot" that could revolutionize air-to-air combat in ways that have so far been limited to the realm of fiction - at least as far as we know.
Armed, fully autonomous drone swarms should be classified as WMD because of their degree of potential harm and inherent inability to differentiate between military and civilian targets—both of which are characteristics of existing weapons categorized as WMD, argues Zachary Kallenborn.
The ongoing COVID-19 crisis is mobilising thousands of people across many fields to work together to find a cure for the disease. NEO.LIFE asked prominent bioscientists and big thinkers whether there might be glimmers of hope that will emerge when the "all clear" is finally declared.
I've recently read David Wood's new book - RAFT 2035: Roadmap to Abundance, Flourishing, and Transcendence, by 2035. David Wood offers a set of ideas for tackling the big challenges ahead of us with new tools, ones we either already have or will have soon, and shows how we can use them to face those challenges - from climate change, to improving mental and physical wellbeing, to better politics, and even venturing beyond Earth, into space. What I liked about the book is that it shows it is possible to make a world that works and where humanity can flourish.