Issue #211 - how deepfakes can be used in espionage; Boston Dynamics-like robot takes revenge in a fan film; why robotics needs Amazon
This week - how deepfakes can be used in espionage; Boston Dynamics-like robot takes revenge in a fan film; robots and pizza; why robotics needs Amazon; and more!
MORE THAN A HUMAN
► Michio Kaku: Genetic and digital immortality are within reach (3:12)
Technology may soon grant us immortality, in a sense. Research projects and companies are already trying to overcome death. Some take the digital path, searching for a way to digitise our brains for future generations; others explore genetics and how to extend human life through gene engineering.
Upgrade Your Memory With a Surgically Implanted Chip
Over the past five years, DARPA has invested $77 million to develop devices intended to restore the memory-generation capacity of people with traumatic brain injuries. Last year, two groups conducting tests on humans published compelling results. However, don't expect to get one of these implants to boost your memory anytime soon, says one of DARPA's directors.
Exoskeleton in Rehab
ARTIFICIAL INTELLIGENCE
Experts: Spy used AI-generated face to connect with targets
Katie Jones, the person in the picture above, does not exist. Her face was generated by a computer, yet the people behind her were able to connect on LinkedIn with high-profile targets. Experts who reviewed the Jones profile's LinkedIn activity say it's typical of espionage efforts on the professional networking site, whose role as a global Rolodex has made it a powerful magnet for spies.
MIT’s New AI Can Look at a Pizza, and Tell You How to Make It
MIT researchers have taught a new artificial intelligence how to reverse engineer pizza. They’ve trained an AI to look at a photo of pizza and determine both the toppings on it and the order in which they were placed. The research could lead to a system that could process any photo of food and produce a recipe for it — an exciting new example of AI’s potential to disrupt the food sector.
Will AI Achieve Consciousness? Wrong Question
"We don't need artificial conscious agents", states the article. "What we are creating are not—should not be—conscious, humanoid agents but an entirely new sort of entity, rather like oracles, with no conscience, no fear of death, no distracting loves and hates, no personality (but all sorts of foibles and quirks that would no doubt be identified as the “personality” of the system): boxes of truths (if we’re lucky) almost certainly contaminated with a scattering of falsehoods."
ROBOTICS
► Boston Dynamics: New Robots Now Fight Back (3:31)
Treat your robots well or else... Nice short video from Corridor Digital.
► Jeff Bezos controls robotic arms (1:14)
Jeff likes robots.
Why Robotics Needs Amazon: Analysis From re:MARS Conference
Amazon's influence on robotics is huge. From autonomous warehouse robots to Prime Air drones to robotics tools in AWS, Amazon continues to invest in this space and is becoming one of the leading forces in robotics.
Domino’s serves up self-driving pizza delivery pilot in Houston
If you live in Houston, your pizza from Domino's might be delivered by an autonomous robot. Domino's is launching a pilot for self-driving pizza delivery in Houston in partnership with Nuro, an autonomous-driving tech company with engineers from Waymo, Tesla and Uber, to name a few.
Sensitive Whiskers Could Make Small Drones Safer
This whisker sensing system can detect air pressure from nearby objects even before they make physical contact, and it brings us one step closer to robotic cats.
MIT’s new AI for robots can ‘feel’ an object just by seeing it
Researchers from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) have developed a brand new AI that can feel objects just by seeing them – and vice versa. The new AI can predict how it would feel to touch an object just by looking at it, and it can also create a visual representation of an object from the tactile data it generates by touching it. The researchers had a robotic arm touch 200 household objects 12,000 times, recording both visual and tactile data. From this, they created a dataset of 3 million visual-tactile images, which the robot used to predict how an object will feel.
Thank you for subscribing,
Conrad Gray (@conradthegray)
If you have any questions or suggestions, just reply to this email or tweet at @hplusweekly. I'd like to hear what you think about H+ Weekly.
Follow H+ Weekly!