Feed aggregator

Elon Musk on Tesla’s fully autonomous car: ‘What we’ve got will blow people’s minds, it blows my mind… it’ll come sooner than people think’

Robotics VO - Thu, 08/04/2016 - 15:44

During a conference call today, Elon Musk talked about Tesla’s progress toward Level 4 fully autonomous driving. While he didn’t want to make an announcement on the call, he said the capability is coming sooner than people think: “What we’ve got will blow people’s minds, it blows my mind… it’ll come sooner than people think.”

Musk’s most recent prediction put the technology on track to be ready in Q4 2017, around the time the Model 3 enters production.

Of course, Musk always emphasizes that the technology will be ready long before regulators approve it for public use.

But as Tesla showed with the rollout of Autopilot, the automaker introduces the necessary hardware into its fleet first and enables features later through over-the-air software updates.

Not one to back away from making outlandish claims, Musk was particularly enthusiastic about Tesla’s prospects in self-driving today:

“It blows me away the progress we are making. And if it blows me away, it’s really going to blow away other people too when they see it for the first time.”

When asked whether Level 4 autonomous driving is a hardware or a software problem at the moment, he said that the hardware necessary for self-driving already exists; it’s now only a question of software.

Musk added that Tesla is focusing on developing advanced narrow AI and improving its neural maps. He also said the system needs to be able to run on a “reasonably small computer that can fit in the car.”

An announcement is expected by the end of the year. In June, when asked directly whether the Model 3 will be autonomous, Musk looked like he was about to give a direct answer, but after a few seconds of hesitation he said that there will be another big event “maybe toward the end of the year” during which he will talk in more detail on the subject.

He then added that it will be “really big news” when he starts talking about it and that Tesla will do the “obvious thing”.

The “obvious thing” is now expected to be fully self-driving technology. With all the hype, anything else would be somewhat disappointing.

Full text of the article by Fred Lambert in Electrek.

How Lockheed Martin's SPIDER Blimp-Fixing Robot Works

Robotics VO - Wed, 08/03/2016 - 18:36

Airships, which are distinct from blimps by being much more rigid and sounding much less silly, are one of those unusual technologies that have been undergoing a resurgence recently after falling out of favor half a century ago. Airships have the potential to be a very practical and cost-effective way to move massive amounts of stuff from one place to another, especially if that other place is low on infrastructure and has a reasonable amount of patience.

Lockheed Martin’s Skunk Works has spent the last decade or so developing a particular kind of airship called a hybrid airship, which uses a combination of aerodynamics and lifting gas to get airborne. The P-791 technology demonstrator first flew in 2006, and a company called Hybrid Enterprises is taking Lockheed’s airship technology to commercialization. Their LMH-1 will be able to carry over 20,000 kilograms of whatever you want, along with 19 passengers, up to 2,500 kilometers, and it’s going to be a real thing: Hybrid Enterprises recently closed a US $480 million contract to build 12 of them for cargo delivery.

As part of the construction and ongoing maintenance of an airship, it’s important to inspect the envelope (the chubby bit that holds all the helium) for tiny holes that, over time, can have a significant impact on the airship’s ability to fly. The traditional way to do this involves humans, and like most things involving humans, it’s an expensive and time-consuming process. To help out, Lockheed Martin has developed “Self-Propelled Instruments for Damage Evaluation and Repair,” or SPIDERs, which are teams of robots that can inspect airship skins for holes as well as representing one of the less ludicrous robot acronyms that we’ve seen recently.

For details on SPIDER, we spoke with hybrid airship engineer Ben Szpak about where the idea came from, how the robot works, and what their plans are for the future.

IEEE Spectrum: Can you tell us about the history of this project? Where’d you get the idea for the design of the robot, and were there other designs that you tried before finalizing this one?

Ben Szpak: Airships have been inspected for pinholes in the same way ever since they were first designed. A crew of workers partially inflates the envelope and locates holes with a bright light while the team on the inside patches them up.

The SPIDER concept originated when the P-791 Hybrid Airship demonstrator flew 10 years ago. The idea of eliminating the pinhole check, a serial step in the production schedule, is attractive when you are producing a significant number of large airships each year. SPIDER allows this check to be performed in parallel with the airship’s final assembly.

The SPIDER design grew organically through many iterations into the robust and simple approach you see today. We tested everything from stabilized two-wheeled batons to centrally pivoting four-wheeled designs before settling on our design. We were able to use our expertise in advanced manufacturing to 3D print parts on demand, allowing us to rapidly build and test new designs and learn from our successes and failures.

Why is the task that SPIDER performs important, and why did you decide that a team of robots was the best way to tackle it?

Helium is a very small molecule, and a lot of effort is put into developing helium-tight fabrics for the envelope. Pinholes can be a huge problem if they aren’t eliminated, and they are hard to detect with the human eye. The process of patching the pinholes over a large envelope is also very time-consuming and tedious. Robots have the advantage of running continuously and operating on the bottom, top, and sides of a fully inflated envelope, where it can be difficult for people to reach. Since SPIDERs have evolved into rather small robots in order to operate over curved surfaces and climb the envelope, a team of SPIDERs allows for an even faster inspection and repair process.

What are some unique challenges about operating robots across the skin of an airship, and how does SPIDER solve them?

SPIDER has to operate over a non-uniformly curved surface while also propelling itself up, down, and upside-down. We handle the curved surface by designing the chassis to comply with the envelope, twisting and bending slightly to ensure a tight coupling to its mating half. The challenge of driving in strange orientations on the envelope while still accurately measuring movement is handled with optical encoders that watch the envelope pass, rather than shaft encoders, which are susceptible to wheel slippage. We also use the known shape of the envelope, together with accelerometers, to approximately locate the robot on the envelope.
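
Szpak doesn’t go into implementation detail, but the general recipe he describes (slip-free surface odometry from optical sensors, plus a gravity vector from accelerometers combined with the known envelope geometry to pin down position around the hull) can be sketched roughly as follows. Everything below, from the function names to the constant-curvature model, is an illustrative assumption, not Lockheed Martin’s code.

    import math

    # Hypothetical sketch of SPIDER-style localization on an airship envelope:
    # optical sensors give slip-free surface displacement, accelerometers give
    # the gravity vector, and an assumed local hull curvature converts that
    # gravity reading into a position around the hull.

    HULL_RADIUS_M = 15.0  # assumed local radius of curvature of the envelope

    def integrate_odometry(pose, dx, dy):
        """Accumulate an optical-flow displacement (robot frame, meters) into the pose."""
        x, y, heading = pose
        x += dx * math.cos(heading) - dy * math.sin(heading)
        y += dx * math.sin(heading) + dy * math.cos(heading)
        return (x, y, heading)

    def arc_position_from_gravity(ax, ay, az):
        """Tilt of the surface normal relative to gravity, converted to arc length on the hull."""
        return math.atan2(ay, az) * HULL_RADIUS_M

    def fuse(pose, accel, blend=0.1):
        """Correct odometry drift in the around-the-hull direction using the gravity estimate."""
        x, y, heading = pose
        y_gravity = arc_position_from_gravity(*accel)
        return (x, (1.0 - blend) * y + blend * y_gravity, heading)

    # Toy update: drive 5 cm forward, then correct with one accelerometer reading.
    pose = integrate_odometry((0.0, 0.0, 0.0), 0.05, 0.0)
    print(fuse(pose, (0.0, 1.2, 9.7)))

The point of the sketch is the division of labor: the optical measurements handle fine-grained motion, while gravity plus the known hull shape keeps the estimate from drifting.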

How many robots (and how much time) does it take to cover the airship, and how does the patching mechanism work? Are these robots that would be monitoring the airship continuously, or would they be deployed to perform maintenance at specific intervals?

We expect five to six SPIDERs to cover the LMH-1 vehicle, which has roughly 80,000 square feet of envelope, in less than five days, depending on the number of pinholes found. The manual process of locating and patching pinholes can take about ten days and doesn’t happen in parallel with the production process, which is a major benefit of SPIDER. The patching mechanism works similarly to a handheld label applicator, applying a patch over the hole once the SPIDER is positioned. We will deploy the SPIDERs during airship final assembly and at major maintenance checks.
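
For a rough sense of scale, those figures imply the per-robot coverage rates below; this back-of-the-envelope arithmetic is ours, not Lockheed Martin’s.

    envelope_sqft = 80_000   # approximate LMH-1 envelope area quoted above
    robots = 6               # upper end of the five to six SPIDERs mentioned
    days = 5                 # quoted upper bound for the inspection

    per_robot_per_day = envelope_sqft / (robots * days)
    print(f"~{per_robot_per_day:,.0f} sq ft per robot per day")       # ~2,667
    print(f"~{per_robot_per_day / 24:,.0f} sq ft per robot per hour") # ~111, if run around the clock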

Can you talk about the future of SPIDER? Are there ways in which you’d like to upgrade or improve the robots, or will the success of these robots lead to other robotic systems being developed in this space?

We’ve learned a lot about autonomous inspection and repair with SPIDER, and those lessons will be applicable as more opportunities arise in this field. We are working on more ideas, like SPIDERs roaming around in-flight for larger airships.

Anything else cool about these robots that you can tell us?

We were able to build SPIDER almost entirely from off-the-shelf components, designing our own critical parts and integrating everything together. This demonstrates the amazing growth of the robotics community in the past few years and shows that, with the right applications, robotics can be used to solve a vast set of challenges.

Full text of the article by Evan Ackerman in IEEE Spectrum.

Delphi Intends To Deliver Fully Autonomous Cars By 2022

Robotics VO - Wed, 08/03/2016 - 18:08

Delphi Automotive was the first company to complete a cross-country road trip using autonomous technology, driving from San Francisco to New York in April 2015. Next, the company will test the technology on an international stage.
Executives said Monday the company has entered into a strategic partnership with the government of Singapore to test autonomous vehicles. Over the course of the three-year pilot project, the global automotive supplier will conduct trials of low-speed, mobility-on-demand vehicles along predetermined routes, gradually introducing systems with greater capability.
Starting this week, the same Audi SQ5 that crossed America will begin driving in Singapore’s “one-north” business park. Early next year, Delphi intends to transition into a second phase of the project, in which six electric vehicles outfitted with autonomous technology will operate in ride-hailing service. Then in 2018 or 2019, executives say they intend to test fully self-driving vehicles—with no drivers as fail-safes—on Singapore’s streets.
Should all go as scheduled, Delphi says it would then move on to grander plans for production of driverless vehicles capable of shuttling on-demand customers or cargo by 2022.
With Apple’s plans in the realm of self-driving cars still a secret, Google’s ever-evolving, and other automakers offering nebulous timelines toward full autonomy, the Delphi announcement marks a milestone by offering a concrete time frame for bringing Level 4 autonomy, in which cars can complete entire journeys without human intervention, to the mass market.

Singapore selected Delphi via a closed-bid process. “This is an exciting opportunity for us,” Glen DeVos, the company’s vice president for engineering, tells Car and Driver. “We think this is an important but appropriate step in developing automated mobility on demand. It will be in specific areas, so think of it as geo-fenced. That limits the complexity of the deployment. We expect we will learn a tremendous amount, and by the end of the project, we can move on to the next phase, which, assuming success, is to operationalize.”

DeVos isn’t saying which brand of electric cars the company will utilize for the initial part of the project. Going forward, he sees a symbiotic link between autonomous technology and electric vehicles. The self-driving technology itself can be used in any vehicle, from cars, buses, and taxis to trucks and purpose-built mobility pods.

Singapore may be most interested in the latter. The city-state’s Land Transport Authority, in the process of expanding its rail network by 2030, wants to examine how autonomous cars could help transportation planners solve the so-called “first and last mile” dilemma—how to help commuters get from their homes or offices to and from mass-transit options. Government officials hope they can reduce overall traffic congestion and vehicle pollution by encouraging more people to use public transit. Delphi will design vehicles tailored to that purpose.

“From a user-experience standpoint, that’s a very specific use case and a very different expected experience than using your own car,” DeVos said. “So we need to make sure we design that experience so it’s enjoyable, safe, and effective. On one hand, the technology has to be there. But it has to be a really enjoyable experience for the end customer, so they get really excited about it.”

Beyond evaluating the technology itself, the other component of the project both Delphi and Singapore are focusing on is the cost of autonomous operations. In city environments, the cost of operating a vehicle with a commercial driver can exceed $3 per mile, according to Delphi estimates. Autonomy offers the potential, the company says, to reduce those expenses to as little as 90 cents per mile. Such savings could offer tremendous potential for a ride-hailing service like Uber or Lyft, not to mention delivery services like FedEx or UPS. As the pilot project develops, one key question Delphi will attempt to answer is which will be ready for autonomous operations first: the passenger fleet or the cargo-hauling market.

“That’s part of what the market is sorting out right now,” DeVos said. “Some OEMs are making investments in one area and not the other, necessarily. There’s a real divergence of opinion. We believe it will come from one of the delivery methods for a big logistics company. They’ll absolutely be looking at how to take advantage.”

Whenever Delphi is ready to move into a stage of producing driverless pods, it says it’s open to an alliance with an OEM or another company.

The pilot project in Singapore is one of several concurrent steps toward testing autonomous technology for Delphi. The company says it plans two similar pilot projects for North America and Europe, but details on those aren’t yet firm.

Full text of the article in Car and Driver Blog.

Elon Musk's open source OpenAI: We're working on a robot for your household chores

Robotics VO - Tue, 06/21/2016 - 14:41

OpenAI, the artificial-intelligence non-profit backed by Elon Musk, Amazon Web Services, and others to the tune of $1bn, is working on a physical robot that does household chores.
The robot OpenAI is targeting would be as reliable, flexible, and intelligent as Rosie the maid from TV cartoon comedy The Jetsons.
OpenAI leaders Musk, Sam Altman, Ilya Sutskever, and Greg Brockman explain in a blog post that they don't want to manufacture the robot itself, but "enable a physical robot ... to perform basic housework".
This would be a general-purpose robot along the lines of BRETT, UC Berkeley's Berkeley Robot for the Elimination of Tedious Tasks, which is being trained using a combination of deep learning and reinforcement learning, a field of AI covering decision making and motor control through trial and error, based on rewards and punishments.
These combined AI techniques have been used by Google's DeepMind researchers to train its agents to master Atari games and more recently navigate virtual 3D spaces and solve more complex puzzles, such as teaching a virtual ant how to play soccer.
OpenAI says it is "inspired" by DeepMind's work in this field, displayed by its Atari games, and AlphaGo's victory over human Go masters.
DeepMind last week highlighted that it used deep-reinforcement learning to train its agent to play multiple Atari games. OpenAI says it wants to "train an agent capable enough to solve any game", but notes significant advances in AI will be required for that to happen.
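To make the trial-and-error idea concrete, here is a minimal, self-contained sketch of tabular Q-learning on a toy five-state corridor. It only illustrates learning from rewards; the deep reinforcement learning used by DeepMind and targeted by OpenAI replaces the lookup table with a neural network and tackles far harder tasks.

    import random

    # Toy corridor: states 0..4, actions 0 (left) and 1 (right).
    # Reaching state 4 ends the episode with a reward of 1; every other step gives 0.
    N_STATES, ACTIONS = 5, (0, 1)
    ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.3   # learning rate, discount, exploration rate

    Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

    def env_step(state, action):
        nxt = max(0, state - 1) if action == 0 else min(N_STATES - 1, state + 1)
        reward = 1.0 if nxt == N_STATES - 1 else 0.0
        return nxt, reward, nxt == N_STATES - 1

    for episode in range(300):
        state = 0
        for _ in range(1000):                       # cap episode length
            # Epsilon-greedy: usually exploit the best known action, sometimes explore.
            if random.random() < EPSILON:
                action = random.choice(ACTIONS)
            else:
                action = max(ACTIONS, key=lambda a: Q[(state, a)])
            nxt, reward, done = env_step(state, action)
            # Q-learning update: nudge the estimate toward reward plus discounted future value.
            best_next = max(Q[(nxt, a)] for a in ACTIONS)
            Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])
            state = nxt
            if done:
                break

    # After training, the greedy policy for states 0-3 should be "go right" (action 1).
    print([max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_STATES - 1)])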
OpenAI's recently opened Gym Beta targets advances in reinforcement learning, which the group favors because it achieves good results in varied settings without requiring algorithms to make too many assumptions.
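As a concrete illustration of what Gym provides, here is a minimal agent loop written against the Gym API as it existed at launch; the random policy is just a placeholder for a real learning algorithm.

    import gym  # pip install gym

    env = gym.make('CartPole-v0')          # one of the classic-control benchmark tasks
    observation = env.reset()

    total_reward = 0.0
    for _ in range(200):
        action = env.action_space.sample()                  # random placeholder policy
        observation, reward, done, info = env.step(action)  # act, observe, collect reward
        total_reward += reward
        if done:                                            # pole fell or time limit reached
            observation = env.reset()

    print('reward collected by the random policy:', total_reward)

Every Gym environment exposes this same reset/step interface, which is what lets OpenAI compare algorithms across games, control problems and, eventually, robotics tasks.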
The group also plans to build an agent that can understand natural language and seek clarification when following instructions to complete a task. OpenAI plans to build new algorithms that can advance this field.
"Today, there are promising algorithms for supervised language tasks such as question answering, syntactic parsing and machine translation but there aren't any for more advanced linguistic goals, such as the ability to carry a conversation, the ability to fully understand a document, and the ability to follow complex instructions in natural language," OpenAI noted.
Finally, OpenAI wants to measure its progress across games, robotics, and language-based tasks. OpenAI's Gym Beta will be used to serve this function.
Musk and co launched OpenAI in 2015 as an open non-profit to act as a counter-balance to huge investments in AI by corporations like Google, Microsoft and Facebook.
Full text of the article by Liam Tung at ZDNet.
Like BRETT, UC Berkeley's Berkeley Robot for the Elimination of Tedious Tasks, OpenAI's design would be a general-purpose robot. Image: University of California, Berkeley
 

Living in the robotic age

Robotics VO - Tue, 06/14/2016 - 17:07

The federal government-wide National Robotics Initiative (NRI) marks five years of multi-agency effort to accelerate the research, development, and use of robots that work beside or cooperatively with people.
Robots are about to transform how we live and work. Decades of federally-supported science and engineering research enabled us to reach this point.

The idea of universal robots has been around for almost a century, but it is only in the last few years that robots of all kinds have begun to enter our day-to-day lives, acting in close proximity to people.

Self-driving cars have driven more than 1.5 million miles; robotic surgical tools have assisted physicians in more than 1.75 million procedures; and personal and domestic robots are owned by more than 14 million consumers worldwide — all predicated on fundamental research supported by the U.S. government.

The multibillion-dollar global market for robotics, dominated for decades by industrial uses, is beginning to see a shift toward new consumer and workplace applications as robots are increasingly used in homes and hospitals, on farms, and even in space. The number of cooperative robots, or co-robots, that work beside and with humans will only grow in the coming years.

For more than four decades, researchers — many of them funded by the federal government — have explored how to help machines interpret their environment and humans’ instructions so they may operate safely and reliably alongside people. Human actions that seem intuitive — from how to grasp and set things down to how to traverse uneven ground — presented significant challenges for machines.

Videos from the MIT AI Film Archive show some of the early history of robotics research.

Dozens of basic research breakthroughs in areas ranging from sensing and cognition to power and mobility were required for researchers to develop sufficiently capable and robust robots to perform tasks in the unstructured environment of the real world.

Some of these breakthroughs were intended for robotics, but others — in algorithms, materials and systems research — were for more general purposes and enabled capabilities we never expected, from robotic muscles to machines small enough to be ingested.

The combination of all these research advances has brought us to a point where many in the robotics and business communities anticipate rapid advances that can be applied to a range of new problems and environments, allowing humans and robots to work even more collaboratively and synergistically.

 President Obama announcing the National Robotics Initiative at Carnegie Mellon University’s National Robotics Engineering Center on June 24, 2011. (Credit: Carnegie Mellon University)

In 2011, President Obama announced the National Robotics Initiative (NRI) — a multi-agency collaboration among the National Science Foundation (NSF), NASA, the National Institutes of Health (NIH), and the National Institute of Food and Agriculture (NIFA) within the U.S. Department of Agriculture — to accelerate the development of next-generation robots that can solve problems in areas of national priority, including manufacturing, sustainable agriculture, space and undersea exploration, health, transportation, personal and homeland security, and disaster resiliency and sustainable infrastructure. The Department of Defense (DOD) and the Department of Energy (DOE) joined the initiative in 2014 and 2015, respectively.

The long-term vision of the NRI is to integrate co-robots safely in our everyday lives so that they can help us at work and at home, assisting with difficult or dangerous tasks, from construction to demolition, and supplementing human speed and vision.

Then and now, the focus has also been on developing robots that can help improve our economy. As the President said in his speech announcing the initiative, “As futuristic, and let’s face it, as cool as some of this stuff is … this partnership is about new cutting-edge ideas to create new jobs, spark new breakthroughs and reinvigorate American manufacturing today.”

Through the NRI, NSF and federal agency partners have funded more than 230 projects in 33 states, with an investment totaling more than $135 million. These projects have led to robots that can inspect bridges, monitor water quality and even aid in future space missions.

The National Robotics Initiative supports Carnegie Mellon University robotics researchers studying how to use drones to monitor infrastructure. (Credit: CMU)

They have led to the development of wearable robotic devices that improve the quality of life for people with disabilities and protect our nation’s workforce from harm, whether from hazardous materials or repetitive injury. And they have advanced the state of the art in autonomous vehicles, catalyzed widespread interests in soft robotics, and jump-started efforts to develop robots for tutoring and educational development.

 Ekso Bionics, supported by NRI, has developed robotic exoskeletons for use by individuals who have had strokes or spinal cord injuries. (Credit: Ekso Bionics)

Perhaps most importantly, NRI brought together disparate research communities — catalyzing new collaborations and advances — and inspired scientists working on fundamental research questions to consider their work in the context of specific domains such as agriculture, health, space, defense and hazard reduction.

On June 9th, the Congressional Robotics Caucus hosted an event in Washington, D.C., at which leading thinkers from industry, academia and government discussed the advances of the last five years, and research teams demonstrated today’s cutting-edge robotics designs, from coordinated robot swarms to exoskeletons that can help paralyzed people walk.

 

Jnaneshwar Das from the GRASP Laboratory at the University of Pennsylvania exhibited robotic systems that can improve the efficiency and yield of farm operations at a Congressional Robotics Caucus-sponsored event.

The progress over the last five years has been astonishing, but it’s only a glimpse of what can be accomplished through this collaboration.

The robotics research community is currently hard at work developing a roadmap that outlines the research still needed to create robots that can work safely and efficiently with people for a variety of uses — assisting blind travelers, helping autistic children learn, letting the elderly remain in their homes — while also enabling robots to work in places where humans can’t go, whether it’s into precarious rubble after a disaster, the depths of our seas, or even the distant parts of our galaxy.

 The Baxter robot hands off a cable to a human collaborator — an example of a co-robot in action. (Credit: Aaron Bestick, UC Berkeley)

With coordinated federal effort, the National Robotics Initiative has charted a path forward for the development of collaborative robots, one in which we can interact safely and naturally with robots as part of our everyday lives. Continuing investment will allow the brightest minds in robotics to tap novel research opportunities and explore new avenues for tomorrow’s co-robots, increasing the nation’s economic competitiveness and enhancing our quality of life.

Lynne Parker, National Science Foundation

Robert Ambrose, NASA

Grace Peng, National Institutes of Health

Daniel Schmoldt, U.S. Department of Agriculture

Reza Ghanadan, U.S. Department of Defense

Rodrigo Rimando, U.S. Department of Energy

Terah Lyons, White House Office of Science and Technology Policy, Executive Office of the President

 

