Issue #157

This week – Google to end partnerships with Pentagon and releases its AI principles; Norman, the psychopathic AI; building computers with living neurons; and more!

More Than A Human

This Smartphone Pioneer Is Fighting to Create a Transhumanist Superdemocracy

Meet David Wood, a smartphone pioneer who co-created Symbian and a transhumanist calling for changes that would radically rethink how people and politicians interact. He also organises London Futurists meetups, where other transhumanists and futurists explain their visions of the future.

The Augmented Human Being

Here’s an interview with George Church, a pioneer of genetics. It includes a video (49 minutes long) and a transcript. Topics discussed – safety in genetic research, the rise of CRISPR, and editing the human genome.

Mind Uploading

Another excellent and in-depth video by Isaac Arthur. This time, Isaac talks about mind uploading – how it works and some of its implications for concepts like consciousness, identity, and individuality.

Artificial Intelligence

Google ‘to end’ Pentagon Artificial Intelligence project

Google was under heavy pressure, both from the outside and from within, when the news about its partnership with the Pentagon became public. This week, sources in the company said that Google will not renew the contract with the Pentagon, which expires next March.

AI at Google: our principles

This blog post outlines the principles Google will follow when creating AI systems. They include things like making AI socially beneficial, avoiding bias, testing, safety, accountability, and so on. Of particular interest is the section “AI applications we will not pursue”, which includes weaponising AI. These principles feel like Google wants to go back to the “Don’t be evil” mantra, at least in AI.

10 Unsettling Artificial Intelligence Scenarios

In this video, John Michael Godier goes through 10 possible scenarios for what could happen if we create a superintelligence that does not wipe out humanity. The scenarios range from an AI that wants to go to space, to an AI that stops another superintelligence from rising, to an AI that commits suicide.

Are you scared yet? Meet Norman, the psychopathic AI

Here’s another example of how training data shapes a machine learning model. What happens when you train an AI on a dataset containing images of people dying in gruesome circumstances? You get Norman, an AI that sees the world as an extremely bleak place. When researchers showed Norman images from the Rorschach test, all it could see was death and disembodied humans.

Is strong AI inevitable?

This article tries to answer the question in a slightly different way than others. It looks at two different types of intelligence – weak (associated with predictions) and strong (associated with explanations). It then examines each in turn, asks whether either can take us to strong AI, and leaves you to decide the answer.

AI Winter Is Well On Its Way

AI Winter is a term for the periods of stagnation in AI research that follow waves of high hopes and claims that strong AI is just around the corner. There have been a couple of AI Winters in the past, and some people think another one is near, in which deep learning systems will fail to meet the high expectations placed on them. This article points out subtle signs that deep learning is already starting to fall short of the hype, and predicts that the biggest blow to its fame will come from self-driving cars (or the lack of them).


Robotics

Takeaways to be delivered by drone in Shanghai

You can now order a takeaway delivered by drone – if you live in Shanghai. A food delivery service owned by Alibaba has been given permission by the authorities to deliver takeaways by drone along 17 routes in Shanghai’s Jinshan Industrial Park.

Don’t fear my robots, says the Boston Dynamics founder who makes machines that drive the internet wild

At a robotics conference in late May, Marc Raibert, CEO of Boston Dynamics, told the AP that technological advances come with risk — but that’s because of humans, not machines. “We think about that, but that’s also true for cars, airplanes, computers, lasers,” said Raibert. “Every technology you can imagine has multiple ways of using it. If there’s a scary part, it’s just that people are scary. I don’t think the robots by themselves are scary.”

DJI and Taser Maker Axon Are Teaming Up to Make Cop Drones

Axon announced it is partnering with dronemaker DJI to sell surveillance drones to U.S. police departments through a program dubbed “Axon Air.” The Taser and body camera manufacturer will work with DJI to offer drones linked to Axon’s proprietary cloud-based data management system.


Biotechnology

The Birth of Wetware

Oshiorenoya Agabi and Benjamin Sadrian are making neural networks. But instead of code, they use living neurons.

CRISPR Fans Fight for Egalitarian Access to Gene Editing

CRISPR is a technology with the potential to change the world like nothing before it. Because of that, some people say access to it should be open, to avoid it being exploited by the wealthy and withheld from those who actually need it.

The FDA Puts the Brakes on A Major CRISPR Trial in Humans

In December, gene-editing company CRISPR Therapeutics announced a partnership with biotech company Vertex to develop CTX001, the world’s first gene-based treatment for sickle cell disease (SCD). Now, the U.S. Food and Drug Administration (FDA) has denied the companies’ request to move forward with an early-phase trial of CTX001 in adult volunteers, placing a “clinical hold” on the application. According to a CRISPR Therapeutics press release, the FDA has “certain questions” it wants resolved before it gives the go-ahead to the human CRISPR study.

Subscribe to H+ Weekly

H+ Weekly is a free, weekly newsletter with the latest news and articles about robotics, AI and transhumanism.
