Elon Musk on Tesla fully autonomous car: ‘What we’ve got will blow people’s minds, it blows my mind… it’ll come sooner than people think’
During a conference call today, Elon Musk talked about Tesla’s progress toward level 4 fully autonomous driving, and while he didn’t want to make an announcement on the call, he said that it is coming sooner than people think: “What we’ve got will blow people’s minds, it blows my mind … it’ll come sooner than people think.”
Musk’s most recent prediction placed the technology as being ready in Q4 2017, around the time the Model 3 will enter production.
Of course, Musk always emphasizes that the technology will be ready long before regulators approve it for public use.
But as Tesla showed with the rollout of Autopilot, the automaker introduces the necessary hardware across its fleet and adds features later through over-the-air software updates.
Not one to back away from making outlandish claims, Musk was particularly enthusiastic about Tesla’s prospect in self-driving today:
“It blows me away the progress we are making. And if it blows me away, it’s really going to blow away other people too when they see it for the first time.”
When asked whether level 4 autonomous driving is a hardware or software problem at the moment, he said that the hardware necessary for self-driving already exists; it’s only a question of software.
Elon added that Tesla is focusing on developing an advanced narrow AI and improving its advanced neural maps. He also added the system needs to be able to run on a “reasonably small computer that can fit in the car.”
An announcement is expected by the end of the year. In June, when asked directly if the Model 3 will be autonomous, Musk looked like he was about to give a direct answer, but after a few seconds of hesitation he said that there will be another big event “maybe toward the end of the year” during which he will talk in more details on the subject.
He then added that it will be “really big news” when he starts talking about it and that Tesla will do the “obvious thing”.
The “obvious thing” is now expected to be fully self-driving technology. With all the hype, anything else would be somewhat disappointing.
Airships, which are distinct from blimps in being much more rigid and sounding much less silly, are one of those unusual technologies that have been undergoing a resurgence recently after falling out of favor half a century ago. Airships have the potential to be a very practical and cost-effective way to move massive amounts of stuff from one place to another, especially if the destination is low on infrastructure and has a reasonable amount of patience.
Lockheed Martin’s Skunk Works has been developing a particular kind of airship called a hybrid airship, which uses a combination of aerodynamics and lifting gas to get airborne, for the last decade or so. The P-791 technology demonstrator first flew in 2006, and a company called Hybrid Enterprises is taking Lockheed’s airship technology to commercialization. Their LMH-1 will be able to carry over 20,000 kilograms of whatever you want, along with 19 passengers, up to 2,500 kilometers, and it’s going to be a real thing: Hybrid Enterprises recently closed a US $480 million contract to build 12 of them for cargo delivery.
As part of the construction and ongoing maintenance of an airship, it’s important to inspect the envelope (the chubby bit that holds all the helium) for tiny holes that, over time, can have a significant impact on the airship’s ability to fly. The traditional way to do this involves humans, and like most things involving humans, it’s an expensive and time consuming process. To help out, Lockheed Martin has developed “Self-Propelled Instruments for Damage Evaluation and Repair,” or SPIDERs, which are teams of robots that can inspect airship skins for holes as well as representing one of the less ludicrous robot acronyms that we’ve seen recently.
For details on SPIDER, we spoke with hybrid airship engineer Ben Szpak about where the idea came from, how the robot works, and what their plans are for the future.
IEEE Spectrum: Can you tell us about the history of this project? Where’d you get the idea for the design of the robot, and were there other designs that you tried before finalizing this one?
Ben Szpak: Airships have been inspected for pinholes in the same way ever since they were first designed. A crew of workers partially inflates the envelope and locates holes with a bright light while the team on the inside patches them up.
The SPIDER concept originated when the P-791 Hybrid Airship demonstrator flew 10 years ago. The idea of eliminating the pinhole check, a serial step in the production schedule, is attractive when you are producing a significant number of large airships each year. SPIDER allows this check to be performed in parallel with the airship’s final assembly.
The SPIDER design grew organically through many iterations into the robust and simple approach you see today. We tested everything from stabilized two-wheeled batons to centrally pivoting four-wheeled designs before settling on our final design. We were able to use our expertise in advanced manufacturing to 3D print parts on demand, allowing us to rapidly build and test new designs and learn from our successes and failures.
Why is the task that SPIDER performs important, and why did you decide that a team of robots was the best way to tackle it?
Helium is a very small molecule, and a lot of effort is put into developing helium-tight fabrics for the envelope. Pinholes can be a huge problem if they aren’t eliminated, and they are hard to detect with the human eye. The process of patching the pinholes over a large envelope is also very time consuming and tedious. Robots have the advantage of running continuously and operating on the bottom, top, and sides of a fully inflated envelope where it can be difficult for people to reach. Since SPIDER has evolved into a rather small robot in order to operate over curved surfaces and climb the envelope, a team of SPIDERs allows for an even faster inspection and repair process.
What are some unique challenges about operating robots across the skin of an airship, and how does SPIDER solve them?
SPIDER has to operate over a non-uniformly curved surface while also propelling itself up, down, and upside-down. We handle the curved surface by designing the chassis to comply with the envelope, twisting and bending slightly to ensure a tight coupling to its mating half. We address the challenge of driving in strange orientations on the envelope while still accurately measuring movement by using optical encoders that watch the envelope pass, rather than shaft encoders, which are susceptible to wheel slippage. We also use the known shape of the envelope, together with accelerometers, to approximately locate the robot on the envelope.
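The localization trick Szpak describes — combining accelerometer readings with the known hull geometry — can be sketched in a few lines. This is a toy illustration, not Lockheed’s implementation: on a roughly spherical hull section, the direction of gravity measured in the robot’s frame gives the local surface normal, and the tilt of that normal maps directly to arc length from the crown. All function names and numbers here are hypothetical.

```python
import math

def tilt_from_accel(ax, ay, az):
    """Angle between the robot's 'up' axis (z) and gravity,
    recovered from a static accelerometer reading (m/s^2)."""
    g = math.sqrt(ax**2 + ay**2 + az**2)
    return math.acos(max(-1.0, min(1.0, -az / g)))  # radians

def meridian_position(tilt, radius):
    """On a spherical hull section of the given radius, the surface
    normal's tilt grows linearly with arc length from the crown, so
    the tilt angle maps directly to distance travelled."""
    return radius * tilt  # arc length, same units as radius

# Example: robot sitting 30 degrees down from the crown of a
# hypothetical 10 m radius hull section.
ax, ay, az = 0.0, 4.905, -8.496   # ~ (0, g*sin30, -g*cos30)
tilt = tilt_from_accel(ax, ay, az)
print(round(math.degrees(tilt)))                 # ~30 degrees
print(round(meridian_position(tilt, 10.0), 2))   # ~5.24 m from the crown
```

A real envelope is not spherical, which is why SPIDER uses the full known shape of the hull; the principle of mapping gravity direction to position is the same.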
“We’ve learned a lot about autonomous inspection and repair with SPIDER, and . . . we are working on more ideas, like SPIDERs roaming around in-flight for larger airships.”—Ben Szpak, Lockheed Martin
How many robots (and how much time) does it take to cover the airship, and how does the patching mechanism work? Are these robots that would be monitoring the airship continuously, or would they be deployed to perform maintenance at specific intervals?
We expect five to six SPIDERs to cover the LMH-1 vehicle, which has roughly 80,000 square feet of envelope, in less than five days, depending on the number of pinholes found. The manual process of locating and patching pinholes can take about ten days and can’t happen in parallel with the production process, which is a major benefit of SPIDER. The patching mechanism works similarly to a handheld label applicator, applying a patch over the hole once the SPIDER is positioned. We will deploy the SPIDERs during airship final assembly and at major maintenance checks.
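As a back-of-the-envelope check on those figures (the area, robot count, and durations are from the interview; the per-robot rate is our own arithmetic):

```python
envelope_sqft = 80_000      # LMH-1 envelope area, per the interview
robots = 5                  # low end of the five-to-six SPIDERs quoted
days = 5                    # upper bound on the robotic inspection time

rate = envelope_sqft / (robots * days)
print(rate)                 # 3200.0 sq ft inspected per robot per day

# The manual process takes ~10 days and blocks the production schedule;
# the robotic one halves the wall-clock time and runs in parallel with it.
```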
Can you talk about the future of SPIDER? Are there ways in which you’d like to upgrade or improve the robots, or will the success of these robots lead to other robotic systems being developed in this space?
We’ve learned a lot about autonomous inspection and repair with SPIDER, and those lessons will be applicable as more opportunities arise in this field. We are working on more ideas, like SPIDERs roaming around in-flight for larger airships.
Anything else cool about these robots that you can tell us?
We were able to build SPIDER almost entirely from off-the-shelf components, designing only our own critical parts and integrating everything together. This demonstrates the amazing growth of the robotics community in the past few years and shows that, with the right applications, robotics can be used to solve a vast set of challenges.
Delphi Automotive was the first company to complete a cross-country road trip using autonomous technology, driving from San Francisco to New York in April 2015. Next, the company will test the technology on an international stage.
Executives said Monday the company has entered into a strategic partnership with the government of Singapore to test autonomous vehicles. Over the course of the three-year pilot project, the global automotive supplier will conduct trials of low-speed, mobility-on-demand vehicles along predetermined routes, gradually introducing systems with greater capability.
Starting this week, the same Audi SQ5 that crossed America will begin driving in Singapore’s “one-north” business park. Early next year, Delphi intends to transition into a second phase of the project, which will involve six electric vehicles outfitted with autonomous technology operating in ride-hailing operations. Then in 2018 or 2019, executives say they intend to test fully self-driving vehicles—with no drivers as fail-safes—on Singapore’s streets.
Should all go as scheduled, Delphi says it would then move onto grander plans for production of driverless vehicles capable of shuttling on-demand customers or cargo by 2022.
With Apple’s plans in the realm of self-driving cars still a secret, Google’s ever-evolving, and other automakers offering nebulous timelines toward full autonomy, the Delphi announcement marks a milestone by offering a concrete time frame for reaching Level 4 autonomy, in which cars can complete entire journeys without human intervention, for mass-market purposes.
Singapore selected Delphi via a closed-bid process. “This is an exciting opportunity for us,” Glen DeVos, the company’s vice president for engineering, tells Car and Driver. “We think this is an important but appropriate step in developing automated mobility on demand. It will be in specific areas, so think of it as geo-fenced. That limits the complexity of the deployment. We expect we will learn a tremendous amount, and by the end of the project, we can move onto the next phase, which assuming success, is to operationalize.”
DeVos isn’t saying which brand of electric cars the company will utilize for the initial part of the project. Going forward, he sees a symbiotic link between autonomous technology and electric vehicles. The self-driving technology itself can be used in any vehicle, from cars, buses, and taxis to trucks and purpose-built mobility pods.
Singapore may be most interested in the latter. The city-state’s Land Transport Authority, in the process of expanding its rail network by 2030, wants to examine how autonomous cars could help transportation planners solve the so-called “first and last mile” dilemma—how to help commuters get from their homes or offices to and from mass-transit options. Government officials hope they can reduce overall traffic congestion and vehicle pollution by encouraging more people to use public transit. Delphi will design vehicles tailored to that purpose.
“From a user-experience standpoint, that’s a very specific use case and a very different expected experience than using your own car,” DeVos said. “So we need to make sure we design that experience so it’s enjoyable, safe, and effective. On one hand, the technology has to be there. But it has to be a really enjoyable experience for the end customer, so they get really excited about it.”
Beyond evaluating the technology itself, the other component of the project both Delphi and Singapore are focusing on is the cost of autonomous operations. In city environments, the cost of operating a vehicle with a commercial driver can exceed $3 per mile, according to Delphi estimates. Autonomy offers the potential, the company says, to reduce those expenses to as little as 90 cents per mile. Such savings could offer tremendous potential for a ride-hailing service like Uber or Lyft, not to mention delivery services like FedEx or UPS. As the pilot project develops, one key question Delphi will attempt to answer is which will be ready for autonomous operations first: the passenger fleet or the cargo-hauling market.
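The economics Delphi is citing are easy to sanity-check. A minimal sketch, using Delphi’s per-mile estimates from the article; the fleet size and annual mileage below are purely illustrative assumptions, not figures from Delphi:

```python
cost_driver = 3.00   # $/mile in a city with a commercial driver (Delphi estimate)
cost_auto   = 0.90   # $/mile projected floor for autonomous operation (Delphi)

saving_per_mile = cost_driver - cost_auto
saving_pct = saving_per_mile / cost_driver
print(round(saving_pct, 2))     # 0.7 -> a 70% reduction in operating cost

# Hypothetical ride-hailing fleet: 100 vehicles at 40,000 miles/year each.
annual = 100 * 40_000 * saving_per_mile
print(round(annual))            # 8400000 -> $8.4M saved per year
```

Savings at that scale are why the article frames the pilot as a question of which market — passengers or cargo — reaches autonomous operations first.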
“That’s part of what the market is sorting out right now,” DeVos said. “Some OEMs are making investments in one area and not the other, necessarily. There’s a real divergence of opinion. We believe it will come from one of the delivery methods for a big logistics company. They’ll absolutely be looking at how to take advantage.”
Whenever Delphi is ready to move into a stage of producing driverless pods, it says it’s open to an alliance with an OEM or another company.
The pilot project in Singapore is one of several concurrent steps toward testing autonomous technology for Delphi. The company says it plans two similar pilot projects for North America and Europe, but details on those aren’t yet firm.
Full text of the article in Car and Driver Blog.
Elon Musk's open source OpenAI: We're working on a robot for your household chores
OpenAI, the artificial-intelligence non-profit backed by Elon Musk, Amazon Web Services, and others to the tune of $1bn, is working on a physical robot that does household chores.
The robot OpenAI is targeting would be as reliable, flexible, and intelligent as Rosie the maid from TV cartoon comedy The Jetsons.
OpenAI leaders Musk, Sam Altman, Ilya Sutskever, and Greg Brockman explain in a blogpost that they don't want to manufacture the robot itself, but "enable a physical robot ... to perform basic housework".
This would be a general-purpose robot along the lines of BRETT, UC Berkeley's Berkeley Robot for the Elimination of Tedious Tasks, which is being trained using a combination of deep learning and reinforcement learning, a field of AI covering decision making and motor control through trial and error, based on rewards and punishments.
These combined AI techniques have been used by Google's DeepMind researchers to train its agents to master Atari games and more recently navigate virtual 3D spaces and solve more complex puzzles, such as teaching a virtual ant how to play soccer.
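Reinforcement learning at this scale means deep networks, but the underlying reward-driven loop is the same one tabular Q-learning uses. As an illustration only — this is a textbook toy, not DeepMind's or OpenAI's method — here is an agent learning by trial and error to walk a five-state corridor toward a reward:

```python
import random

random.seed(0)

# A tiny corridor world: states 0..4, with a reward only at the right end.
N_STATES, GOAL = 5, 4
ACTIONS = [-1, +1]  # step left / step right

def step(s, a):
    s2 = min(max(s + a, 0), N_STATES - 1)
    return s2, (1.0 if s2 == GOAL else 0.0), s2 == GOAL

Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, eps = 0.5, 0.9, 0.2   # learning rate, discount, exploration

for _ in range(500):                # episodes of trial and error
    s, done = 0, False
    while not done:
        if random.random() < eps:   # occasionally explore at random
            a = random.choice(ACTIONS)
        else:                       # otherwise act greedily
            a = max(ACTIONS, key=lambda a: Q[(s, a)])
        s2, r, done = step(s, a)
        best_next = max(Q[(s2, b)] for b in ACTIONS)
        Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])  # reward-driven update
        s = s2

# After training, the greedy policy walks straight toward the goal.
policy = [max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_STATES - 1)]
print(policy)  # [1, 1, 1, 1]
```

Deep reinforcement learning replaces the Q table with a neural network, which is what lets the same update rule scale to Atari pixels or simulated bodies.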
OpenAI says it is "inspired" by DeepMind's work in this field, displayed by its Atari games, and AlphaGo's victory over human Go masters.
DeepMind last week highlighted that it used deep reinforcement learning to train its agent to play multiple Atari games. OpenAI says it wants to "train an agent capable enough to solve any game", but notes significant advances in AI will be required for that to happen.
OpenAI's recently-opened Gym Beta is targeting advances in reinforcement learning, because it is achieving good results in varied settings without algorithms needing to make too many assumptions.
The group also plans to build an agent that can understand natural language and seek clarification when following instructions to complete a task. OpenAI plans to build new algorithms that can advance this field.
"Today, there are promising algorithms for supervised language tasks such as question answering, syntactic parsing and machine translation but there aren't any for more advanced linguistic goals, such as the ability to carry a conversation, the ability to fully understand a document, and the ability to follow complex instructions in natural language," OpenAI noted.
Finally, OpenAI wants to measure its progress across games, robotics, and language-based tasks. OpenAI's Gym Beta will be used to serve this function.
Musk and co launched OpenAI in 2015 as an open non-profit to act as a counter-balance to huge investments in AI by corporations like Google, Microsoft and Facebook.
Full text of the article by Liam Tung at ZDNet.
Like BRETT, UC Berkeley's Berkeley Robot for the Elimination of Tedious Tasks, OpenAI's design would be a general-purpose robot. Image: University of California, Berkeley
Federal government-wide National Robotics Initiative (NRI) marks five years of multi-agency effort to accelerate the research, development and use of robots that work beside or cooperatively with people.
Robots are about to transform how we live and work. Decades of federally-supported science and engineering research enabled us to reach this point.
The idea of universal robots has been around for almost a century, but it is only in the last few years that robots of all kinds have begun to enter our day-to-day lives, acting in close proximity to people.
Self-driving cars have driven more than 1.5 million miles; robotic surgical tools have assisted physicians in more than 1.75 million procedures; and personal and domestic robots are owned by more than 14 million consumers worldwide — all predicated on fundamental research supported by the U.S. government.
The multibillion-dollar global market for robotics, dominated for decades by industrial uses, is beginning to see a shift toward new consumer and workplace applications as robots are increasingly used in homes, hospitals, on farms and even in space. The number of cooperative robots, or co-robots, that work beside and with humans will only grow in the coming years.
For more than four decades, researchers — many of them funded by the federal government — have explored how to help machines interpret their environment and humans’ instructions so they may operate safely and reliably alongside people. Human actions that seem intuitive — from how to grasp and set things down to how to traverse uneven ground — presented significant challenges for machines.
Videos from the MIT AI Film Archive show some of the early history of robotics research.
Dozens of basic research breakthroughs in areas ranging from sensing and cognition to power and mobility were required for researchers to develop sufficiently capable and robust robots to perform tasks in the unstructured environment of the real world.
Some of these breakthroughs were intended for robotics, but others — in algorithms, materials and systems research — were for more general purposes and enabled capabilities we never expected, from robotic muscles to machines small enough to be ingested.
The combination of all these research advances has brought us to a point where many in the robotics and business communities anticipate rapid advances that can be applied to a range of new problems and environments, allowing humans and robots to work even more collaboratively and synergistically.
President Obama announcing the National Robotics Initiative at Carnegie Mellon University’s National Robotics Engineering Center on June 24, 2011. (Credit: Carnegie Mellon University)
In 2011, President Obama announced the National Robotics Initiative (NRI) — a multi-agency collaboration among the National Science Foundation (NSF), NASA, the National Institutes of Health (NIH), and the National Institute of Food and Agriculture (NIFA) within the U.S. Department of Agriculture — to accelerate the development of next-generation robots that can solve problems in areas of national priority, including manufacturing, sustainable agriculture, space and undersea exploration, health, transportation, personal and homeland security, and disaster resiliency and sustainable infrastructure. The Department of Defense (DOD) and the Department of Energy (DOE) joined the initiative in 2014 and 2015, respectively.
The long-term vision of the NRI is to integrate co-robots safely in our everyday lives so that they can help us at work and at home, assisting with difficult or dangerous tasks, from construction to demolition, and supplementing human speed and vision.
Then and now, the focus has also been on developing robots that can help improve our economy. As the President said in his speech announcing the initiative, “As futuristic, and let’s face it, as cool as some of this stuff is … this partnership is about new cutting-edge ideas to create new jobs, spark new breakthroughs and reinvigorate American manufacturing today.”
Through the NRI, NSF and federal agency partners have funded more than 230 projects in 33 states, with an investment totaling more than $135 million. These projects have led to robots that can inspect bridges, monitor water quality and even aid in future space missions.
The National Robotics Initiative supports Carnegie Mellon University robotics researchers studying how to use drones to monitor infrastructure. (Credit: CMU)
They have led to the development of wearable robotic devices that improve the quality of life for people with disabilities and protect our nation’s workforce from harm, whether from hazardous materials or repetitive injury. And they have advanced the state of the art in autonomous vehicles, catalyzed widespread interest in soft robotics, and jump-started efforts to develop robots for tutoring and educational development.
Ekso Bionics, supported by NRI, has developed robotic exoskeletons for use by individuals who have had strokes or spinal cord injuries. (Credit: Ekso Bionics)
Perhaps most importantly, NRI brought together disparate research communities — catalyzing new collaborations and advances — and inspired scientists working on fundamental research questions to consider their work in the context of specific domains such as agriculture, health, space, defense and hazard reduction.
On June 9th, the Congressional Robotics Caucus hosted an event in Washington, D.C., at which leading thinkers from industry, academia and government discussed the advances of the last five years, and research teams demonstrated today’s cutting-edge robotics designs, from coordinated robot swarms to exoskeletons that can help paralyzed people walk.
Jnaneshwar Das from the GRASP Laboratory at the University of Pennsylvania exhibited robotic systems that can improve the efficiency and yield of farm operations at a Congressional Robotics Caucus-sponsored event.
The progress over the last five years has been astonishing, but it’s only a glimpse of what can be accomplished through this collaboration.
The robotics research community is currently hard at work developing a roadmap that outlines the research still needed to create robots that can work safely and efficiently with people for a variety of uses — assisting blind travelers, helping autistic children learn, letting the elderly remain in their homes — while also enabling robots to work in places where humans can’t go, whether it’s into precarious rubble after a disaster, the depths of our seas, or even the distant parts of our galaxy.
The Baxter robot hands off a cable to a human collaborator — an example of a co-robot in action. (Credit: Aaron Bestick, UC Berkeley)
With coordinated federal effort, the National Robotics Initiative has charted a path forward for the development of collaborative robots, one in which we can interact safely and naturally with robots as part of our everyday lives. Continuing investment will allow the brightest minds in robotics to tap novel research opportunities and explore new avenues for tomorrow’s co-robots, increasing the nation’s economic competitiveness and enhancing our quality of life.
Lynne Parker, National Science Foundation
Robert Ambrose, NASA
Grace Peng, National Institutes of Health
Daniel Schmoldt, U.S. Department of Agriculture
Reza Ghanadan, U.S. Department of Defense
Rodrigo Rimando, U.S. Department of Energy
Terah Lyons, White House Office of Science and Technology Policy, Executive Office of the President
A group of underwater archaeologists exploring the sunken remains of King Louis XIV's flagship La Lune added a very special member to their dive team recently. OceanOne, a Stanford-developed humanoid diving robot with "human vision, haptic force feedback and an artificial brain," made its maiden voyage alongside human divers to recover 17th-century treasures from the bottom of the Mediterranean.
Stanford's five-foot "virtual diver" was originally built for studying coral reefs in the Red Sea where a delicate touch is necessary, but the depths go well beyond the range of meat-based divers. The "tail" section contains the merbot's onboard batteries, computers and array of eight thrusters, but it is the front half that looks distinctly humanoid with two eyes for stereoscopic vision and two nimble, articulated arms.
Those arms are what make OceanOne ideal for fragile reef environments or priceless shipwrecks like La Lune, which sank off the coast of France over 350 years ago and hasn't been touched until now. Force sensors in each wrist transmit haptic feedback to the pilot, allowing them to feel the object's weight while staying high and dry on a dive ship. The robot's "brain" works with the tactile sensors to ensure the hands don't crush fragile objects, while the navigation system can automatically keep the body steady in turbulent seas.
With such a nimble platform, OceanOne will also prove useful in dangerous undersea environments like the Fukushima Daiichi site that claimed five other robot divers. Suddenly, the imploded ghost of Nereus is looking downright clunky.
Full text of this article in Engadget.
Boston Dynamics has just posted an incredible video showcasing a massively upgraded version of the ATLAS robot that they initially developed for the DARPA Robotics Challenge. While BD calls this the “next generation” of ATLAS, it looks like such an enormous technological leap forward that it’s more like a completely different species.
Full text of the article by Evan Ackerman and Erico Guizzo in IEEE Spectrum.
The Defense Department’s agency devoted to cutting-edge technologies, the Defense Advanced Research Projects Agency, has big plans for 2017, including the launch of a 130-foot autonomous ship that will begin sailing the seas this year.
The Obama administration requested $2.973 billion for DARPA for fiscal year 2017, the same amount as in its 2016 request, and $105 million more than what was appropriated, said DARPA Director Arati Prabhakar. That amounts to only 2 percent of federal R&D expenditures, but the organization has had a large impact, she added.
“We are an organization that has been designed from the beginning to take risk and manage risk in pursuit of off-scale impact,” she said.
The funding will go toward three major strategic areas: rethinking complex military systems; mastering the information explosion; and developing the seeds of new technological surprise, she told reporters Feb. 10 during a briefing at DARPA headquarters in Arlington, Virginia.
As for the first category, Steve Walker, DARPA deputy director, said, “We need to continue to think how to build highly capable military systems, especially to prepare for fights with highly capable adversaries.”
The military can’t continue to rely on big, monolithic weapons systems that take years to develop. It will never have them in time or in the numbers required to fight advanced adversaries, Walker said.
“We need to mix it up. We need to build war-fighting architectures that are more heterogeneous in nature, hard to target and rely on smaller and cheaper microelectronics technologies," he said.
One example is the Anti-Submarine Warfare Continuous Trail Unmanned Vessel, which, at 130 feet long, will be the largest unmanned surface vehicle ever built, Walker said. It will be christened in April in Portland, Oregon, and then begin to demonstrate its long-range capabilities over 18 months in cooperation with the Office of Naval Research and the Space and Naval Warfare Systems Command.
“Imagine an unmanned surface vessel following all the laws of the sea on its own and operating with manned surface and unmanned underwater vehicles,” Walker said.
“We think the real cost savings will be in operating this vessel at sea compared to how we operate vessels today,” he added. It could be used for counter-mine missions, reconnaissance and resupply, he added.
Another program is the distributed battle management system, which is intended to exchange information even in a jammed environment, he said.
DARPA also has a goal to launch 100 satellites in a 10-day period with its XS-1 reusable launch system. “You can imagine an adversary doing something offensive in space and having that kind of responsive capability,” Walker said.
In the commercial world, there would be tremendous savings from launching that number of spacecraft in that amount of time, he added. More and more, the technologies the agency develops aren’t exclusively for the military. DARPA was an early backer of SpaceX and its Falcon rocket system. It’s a commercial launch provider, but its lower-cost services are now benefitting the military, he noted.
The second category, mastering the information explosion, is separated into two areas: big data for security problems and cybersecurity, Prabhakar said.
In big data, DARPA wants to “empower the end users of data with tools that allow them to create deep value from all these bytes that are just flying at them,” she said. The MUSE program, Mining and Understanding Software Enclaves, is a big data approach to writing software.
As for cybersecurity, “attack entry points are growing,” she said. “We don’t think there is a silver bullet but we believe that major advances can be made.”
One idea is to “take whole classes of vulnerabilities off the table” through the High-Assurance Cyber Military Systems program. Defense Department systems are “chock full of embedded processors,” she said. They may not appear to be connected to the network, but there are ways in, Prabhakar said.
One experiment created a new secure microkernel chip, which was placed in the mission computer of a Little Bird helicopter for a “cyber retrofit.” The chip gave the computer a new security foundation. A red team of hackers tried everything they could to get in the system, but couldn’t. “We made it as easy for them as we could and even gave them source code, but they were not able to break into the system,” Prabhakar said. They were then given full access to one of the applications, a camera, but still couldn’t bust into the mission computer, she said.
The third category, “developing the seeds of new technological surprise,” comprises many different disciplines and programs, she said. The AC/DC program, for example, turns chemical weapons into fertilizer, she said.
In neuroscience, the advanced prosthetics program recently had a big breakthrough when a paralyzed man — with the assistance of electrodes implanted in his cortex — could receive signals from a prosthetic arm. That allowed him to have the sensation of touch for the first time, she said.
Prabhakar said she was excited about the possibilities of the new Revolutionary Enhancement of Visibility Exploiting Active Light Fields, or REVEAL, program.
Photons that hit cameras or sensors register light intensity and spectral information. But those photons could have come from different light sources. As they go through the environment, they are bouncing off and interacting with all sorts of surfaces, she said.
“By the time they get to that sensor, they have actually had these rich and full lives,” Prabhakar said.
REVEAL will see if sensors can capture much more of the information that those photons are bringing — time of arrival, angle of arrival and other characteristics — which may show what is behind an object, or be able to create 3D models of the scene, she said.
Full text of this article by Stew Magnuson in National Defense.
The 2015 National Robotics Initiative (NRI) PI Meeting, held in November, turned out to be another successful meeting of the robotics community. Please visit the organizers' website to review the published presentations and posters.
The "Rapid Fire" research summaries of the NRI-funded research projects will be published on our website at a later time.
Full Proposal Deadline(s) (due by 5 p.m. proposer's local time):
March 07, 2016
January 12, 2017
Second Thursday in January, Annually Thereafter
This solicitation NSF 16-517 is a revision of NSF 15-505, the solicitation for the National Robotics Initiative (NRI). The corresponding National Institutes of Health (NIH) notification, NIH Guide Notice NOT-EB-15-008 (http://grants.nih.gov/grants/guide/notice-files/NOT-EB-13-005.html), is being updated in parallel with this solicitation.
Below are several important points for FY 2016 NRI submissions:
- The U.S. Department of Energy Office of Environmental Management (DOE/EM) has joined the NRI. For a detailed statement of their interests, see section II.A.2. Sponsoring Agency Mission Specific Research.
- The Air Force Office of Scientific Research (AFOSR) has provided its research interests relevant to the NRI. For details, see section II.A.2. Sponsoring Agency Mission Specific Research.
- In the context of NRI, The National Institutes of Health (NIH) is interested in proposals in the area of assistive robotics. NIH will not review proposals submitted on topics in surgical robotics, prosthetics, or exoskeletons, in response to the NRI solicitation. For a detailed statement of NIH’s interests, see section II.A.2. Sponsoring Agency Mission Specific Research. In addition to hypothesis-driven research, NIH also supports non-hypothesis-driven applications, which includes technology-driven and problem-driven applications.
- The Defense Advanced Research Projects Agency (DARPA) has updated its research interests relevant to the National Robotics Initiative (NRI) program. For details, see section II.A.2. Sponsoring Agency Mission Specific Research.
- The research areas supported by NRI include those relating to autonomous operations of robots. This fact has been emphasized by adding a bullet on autonomy to the list of research areas listed in section II.A.1 of this solicitation.
Any proposal submitted in response to this solicitation should be submitted in accordance with the revised NSF Proposal & Award Policies & Procedures Guide (PAPPG) (NSF 16-1), which is effective for proposals submitted, or due, on or after January 25, 2016. Please be advised that proposers who opt to submit prior to January 25, 2016, must also follow the guidelines contained in NSF 16-1.
The Koenigsegg Regera features the world's first fully robotized sports car body. We call it Autoskin. Each component can be operated individually, revealing the Regera's engine bay, luggage compartment and interior. There's also a button on the remote for 'Show Mode', which you can see here. Autoskin is a super-light, practical feature that adds less than 5kg to the Regera's weight.
This is a new feature available only on the new Koenigsegg Regera and another industry first for a serial production car. It's our new robotized control system, called Autoskin, which can open and close all of the doors on the vehicle from the remote control. What you see here is "Show Mode", which opens/closes all openings at the same time. They can also be controlled individually.
In a video posted to their Facebook yesterday, Koenigsegg Automotive AB showed off their “fully robotized sports car body.”
As the world's largest car manufacturer, Toyota knows a thing or two about getting people from point A to point B. But how do you make this process easier? The answer: artificial intelligence. On November 6 Toyota announced that it's establishing a new company, the Toyota Research Institute (TRI), to develop AI technologies in two main areas: autonomous cars and robot helpers for around the home. The company plans to pump $1 billion into TRI over the next five years, and will be establishing headquarters for the company near Stanford University in California, with a secondary facility near MIT in Massachusetts.
Toyota originally announced academic partnerships with these universities last month along with $50 million in funding, but this new financing brings the company's ambitions to another level. Leading TRI as its executive technical advisor and CEO will be Dr. Gill Pratt, the man behind DARPA's Robotics Challenge. In a press release, Pratt said the company's initial goals are to decrease the likelihood of car accidents, make driving accessible to everyone "regardless of ability," and bring extra mobility to the home — "particularly for the support of seniors."
As these first comments suggest, Toyota's vision for self-driving cars differs somewhat from Google and Uber's. Rather than chasing fully autonomous vehicles that chauffeur passengers, Toyota has always stressed the importance of keeping human agency in driving while introducing computer systems that make cars safer. "Our long-term goal is to make a car that is never responsible for a crash," Pratt told IEEE Spectrum last month. "A car that is never responsible for a crash, regardless of the skill of the driver, will allow older people to be able to drive, and help prevent the one and a half million deaths that occur as a result of cars every single year around the world."
Dovetailing with this ambition is the goal of indoor mobility for seniors. Japan's rapidly aging population is a crisis in the making (the number of over-65s is expected to go from 25 percent to 40 percent in the next 30 years), and there are similar problems facing America (over-65s will be around 20 percent of the US population by 2030). Toyota has been developing robotic helpers for an aging population for years, including the Human Support Robot or HSR (which features an articulated torso and arm and video calling functionality), and prototype bots for assisted walking and moving people from the bed to the toilet. These may seem in a different world to self-driving cars, but both products rely on similar realms of AI research, including computer vision and machine learning. Conceptually, Toyota also sees a connection.
"If you think about the use of robotics within the home, it is the same as the use of vehicles when we travel on the road, except that instead of moving goods and people outdoors, you move them indoors," Pratt told IEEE Spectrum. "A lot of the same technology can be brought to bear. Toyota is also convinced that there should be a strong relationship between people and the machines that are helping them to move."
Don't expect any commercial products from TRI any time soon though. At the moment, Toyota says the company's primary mission is to "accelerate R&D in a range of fields" and help "bridge the gap between fundamental research and product development." The company has previously stated that its current ambition is to get semi-autonomous cars on the road by 2020, although some have wondered whether the Japanese firm's cautious attitude will let its Silicon Valley rivals pull ahead.
Full text of the article by James Vincent in The Verge.
It doesn't look that far removed from R2-D2 in "Star Wars" -- a small, autonomous robot that wheels down a sidewalk at about 4 mph. The Starship robot has been developed by Skype co-founders Ahti Heinla and Janus Friis to be the cost-efficient grocery delivery service of the future.
Set to be piloted in several countries including the U.S. and the U.K. in 2016, the robots are meant "to fundamentally improve local delivery of goods and groceries, making it almost free," according to a Starship press release.
The company expects to launch fleets of the small robots that can deliver up to 20 pounds of groceries for $1.50 in under a half-hour. Customers select a delivery time that's convenient for them, and they are able to track the robot's progress through a mobile app. Once the Starship robot shows up, the app user is the only person who can unlock the machine's cargo and get the groceries.
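The unlock rule described above reduces to a simple access check, sketched here with hypothetical names (this is an illustration, not Starship's actual API): only the account that placed the order can open the cargo bay.

```python
# Toy access check: the cargo bay unlocks only for the ordering customer.
# Identifiers and function names here are hypothetical.

def can_unlock(order_user_id: str, requesting_user_id: str) -> bool:
    """True only when the person at the robot is the one who ordered."""
    return order_user_id == requesting_user_id

print(can_unlock("alice", "alice"))  # the customer who ordered
print(can_unlock("alice", "bob"))    # anyone else
```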
The robot uses navigation software with obstacle avoidance, which allows it to drive autonomously without causing havoc on the sidewalk, but a human operator can intervene remotely to guarantee a safe delivery.
"Our vision revolves around three zeroes -- zero cost, zero waiting time, and zero environmental impact," Ahti Heinla, Starship Technologies CEO, said in the release. "We want to do to local deliveries what Skype did to telecommunications."
Heinla added that the robots "are not drones" -- instead of conspicuously flying through the air, they are earthbound, designed to blend in safely with pedestrians. They are also environmentally friendly by being carbon emission-free.
The main goal of the robot is to simplify the delivery process. Retailers ship the grocery items to a central hub where the robot fleet takes over, completing the delivery to the customer, avoiding expensive door-to-door travel costs.
"With ecommerce continuing to grow, consumers expect to have more convenient options for delivery -- but at a cost that suits them," Heinla said. "The last few miles often amounts to the majority of that total delivery cost. Our robots are purposefully designed using the technologies made affordable by mobile phones and tablets -- it's fit for purpose and allows for the cost savings to be passed on to the customer."
Full text of the article by Brian Mastroianni in CBS News.
Yamaha Motors has revealed it is developing a robot designed to ride any racing motorbike at high speeds.
The Japanese company unveiled a prototype at the Tokyo Motor Show.
At present it is reliant on human operators, but in time the firm plans to have the android make its own decisions about the best course and speed to achieve the best race time around a track.
Called the Motobot, this is a fully capable motorcycle-riding humanoid robot that unites Yamaha's motorcycle and robotics technologies. The Motobot is able to ride entirely on its own with no human participation. So far, Yamaha has not disclosed much about the robotic system, except for a few words in a short press release.
“R&D [Research and Development] is currently underway with the goal of developing the robot to ride an unmodified motorcycle on a racetrack at more than 200 km/h,” the Yamaha statement says.
What is also known so far is that the Motobot interacts with the bike in exactly the same way that a human does, twisting its wrists to control the throttle, squeezing the clutch, changing gear and hunkering down behind the windscreen.
Its only limitation so far is that it needs a pair of stabilizing wheels on both sides of the motorcycle to keep the bike from tipping over; Yamaha hopes they won't be necessary in the future.
The Motobot has been developed to serve as an unmanned test pilot to ensure a human rider’s safety. Instead of using human riders to test unknown and potentially dangerous prototypes, Yamaha can make use of the Motobot to ride hundreds or even thousands of test laps before humans get involved.
An experimental self-driving car has set a record for an autonomous road trip in Mexico. The trip from the U.S.-Mexico border to Mexico City provided the opportunity to collect data and prepare for an even longer upcoming road trip from Reno, Nevada to Mexico City.
The autonomous car in this case was a 2010 Volkswagen Passat Variant named Autonomos. The modified, self-driving vehicle can automatically control speed, direction, and braking without human driver intervention, but it also relies upon GPS to safely follow preset routes. Researchers prepared special maps containing terabytes of data detailing the number of lanes, highway markings, exits, intersections and traffic lights.
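A minimal sketch of how a vehicle can follow such a preset route (illustrative only, not the Autonomos team's actual software): locate the nearest stored waypoint to the current GPS fix, then steer toward the next one along the map.

```python
# Toy route follower: find the nearest waypoint to the current GPS fix
# and compute a bearing toward the next waypoint on the preset route.
import math

def nearest_waypoint(position, route):
    """Index of the route waypoint closest to the current position."""
    return min(range(len(route)),
               key=lambda i: math.dist(position, route[i]))

def heading_to(position, target):
    """Bearing (radians) from the current position to a waypoint."""
    dx, dy = target[0] - position[0], target[1] - position[1]
    return math.atan2(dy, dx)

route = [(0.0, 0.0), (1.0, 0.0), (2.0, 1.0)]  # toy map, not real coordinates
pos = (0.9, 0.1)
i = nearest_waypoint(pos, route)
target = route[min(i + 1, len(route) - 1)]    # aim one waypoint ahead
print(i, round(heading_to(pos, target), 2))
```

A real system layers lane detection, obstacle avoidance, and the terabyte-scale map annotations the article mentions on top of this basic route-following loop.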
“We covered 250 to 300 miles daily, so it took a week to arrive to Mexico City,” said Raul Rojas, a visiting professor of robotics and intelligent systems math at the University of Nevada, Reno, in a press release. “Some parts of the highway were scary, but we had no important safety incidents.”
The 2,414-kilometer (1,500-mile) Mexico road trip took place along Mexico’s Highway 15. About five percent of the route included construction work and potholes. But the bigger challenge for the self-driving car came from the lack of lane markings along lengthy stretches of highway due to repaving work over the summer.
Rojas previously tested the same car in autonomous driving mode on a 306-kilometer round trip from Berlin to Leipzig in Germany. He and his colleagues outfitted Autonomos as a “driving laboratory” with seven laser scanners, nine video cameras, seven radars and a GPS roof antenna.
For the Mexico trip, Rojas brought along three German colleagues. Everyone took turns as safety drivers; one person kept an eye on the road in the driver’s seat and one person watched the computer and navigation systems to see what moves the autonomous car planned to do next. The remaining pair of people followed in a support vehicle.
The Mexico road trip represented just one leg of a planned 6,437-kilometer trip from Reno to Mexico City. Eventually, Rojas hopes to improve the autonomous car's ability to predict the behavior of other drivers and pedestrians. Such capabilities would go a long way toward making autonomous cars safer beyond just highway driving.
“If a human can drive with two eyes, I am sure that we will be able to drive autonomously with a computer the size of a notebook and just a handful of video cameras in just a few more years,” Rojas said.
Huge tech companies and automakers have increasingly been testing self-driving cars on public roads. But semi-autonomous features have also been creeping into existing commercial cars. For example, Tesla Motors recently uploaded new Autopilot software to its Tesla Model S vehicles. And in 2014, the Mercedes-Benz S Class already brought semi-autonomous features to the commercial car market with adaptive cruise control and automatic collision prevention.
Full text of this article by Jeremy Hsu in the IEEE Spectrum.
In China's factories, the robots are rising.
For decades, manufacturers employed waves of young migrant workers from China's countryside to work at countless factories in coastal provinces, churning out cheap toys, clothing and electronics that helped power the country's economic ascent.
Now, factories are rapidly replacing those workers with automation, a pivot that's encouraged by rising wages and new official directives aimed at helping the country move away from low-cost manufacturing as the supply of young, pliant workers shrinks.
It's part of a broader overhaul of the economy as China seeks to vault into the ranks of wealthy nations. But it comes as the country's growth slows amid tepid global demand that's adding pressure on tens of thousands of manufacturers.
With costs rising and profits shrinking, Chinese manufacturers "will all need to face the fact that only by successfully transitioning from the current labor-oriented mode to more automated manufacturing will they be able to survive in the next few years," said Jan Zhang, an automation expert at IHS Technology in Shanghai.
Shenzhen Rapoo Technology Co. is among the companies at ground zero of this transformation. At its factory in the southern Chinese industrial boomtown of Shenzhen, orange robot arms work alongside human operators assembling computer mice and keyboards.
"What we are doing here is a revolution" in Chinese manufacturing, said Pboll Deng, Rapoo's deputy general manager.
The company began its push into automation five years ago. Rapoo installed 80 robots made by Sweden's ABB Ltd. to assemble mice, keyboards and their sub-components. The robots allowed the company to save $1.6 million each year and trim its workforce to less than 1,000 from a peak of more than 3,000 in 2010.
Such upgrading underscores the grand plans China's communist leaders have for industrial robotics. President Xi Jinping called in a speech last year for a "robot revolution" in a nod to automation's vital role in raising productivity.
Authorities have announced measures such as subsidies and tax incentives over the past three years to encourage industrial automation as well as development of a homegrown robotics industry.
Some provinces have set up their own "Man for Machine" programs aimed at replacing workers with robots.
Guangdong, a manufacturing heartland in southern China, said in March it would invest 943 billion yuan ($148 billion) to encourage nearly 2,000 large manufacturers to buy robots, the official Xinhua news agency reported. Guangzhou, the provincial capital, aims to have 80 percent of manufacturing automated by 2020.
A relentless surge in wages is adding impetus to the automation revolution. China relied on a seemingly endless supply of cheap labor for decades to power its economic expansion. That equation is changing as the country's working age population stops growing and more Chinese graduate from university, resulting in a dwindling supply of unskilled workers, annual double-digit percentage increases in the minimum wage and rising labor unrest.
Deng said Rapoo's wage bill rising 15-20 percent a year was one big factor driving its use of robots.
“Frontline workers, their turnover rate is really high. More and more people are unwilling to do repetitive jobs. So these two issues put the manufacturing industry in China under huge pressure,” he said.
China's auto industry was the trailblazer for automation, but other industries are rapidly adopting the technology as robots become smaller, cheaper and easier to use. It now only takes on average 1.3 years for an industrial robot in China to pay back its investment, down from 11.8 years in 2008, according to Goldman Sachs.
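Payback figures like these come down to simple arithmetic: a robot pays for itself once cumulative labor savings equal its purchase cost. The input numbers below are illustrative, not Goldman Sachs's actual model.

```python
# Back-of-the-envelope payback period: robot cost divided by the annual
# labor cost it offsets. Input figures are illustrative assumptions.

def payback_years(robot_cost: float, annual_labor_savings: float) -> float:
    """Years until cumulative labor savings cover the robot's price."""
    return robot_cost / annual_labor_savings

# A $50,000 robot offsetting $38,500 a year in wages pays back in about
# 1.3 years; offsetting only ~$4,250 a year, it takes roughly 11.8 years.
# Rising wages shrink the denominator's inverse, shortening payback.
print(round(payback_years(50_000, 38_500), 1))
print(round(payback_years(50_000, 4_250), 1))
```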
Companies such as electronics maker TCL Corp. are using robots to produce higher-value goods. At one factory in Shenzhen, TCL uses 978 machines to produce flat screen TV panels. At another TCL plant in Hefei, near Shanghai, steel refrigerator frames are bent into shape before being plucked by a blue Yasakawa robot arm that stacks them in neat rows for further assembly.
Fridges and big washing machines have heavy internal components, so "if you use automated robots to make them, they also let you cut your labor intensity by a lot," said TCL Chairman Tomson Li.
China held the title of world's biggest market for industrial robots for the second straight year in 2014, with sales rising by more than half to 56,000, out of a total of 224,000 sold globally, according to the International Federation of Robotics.
There's plenty more room for explosive sales growth. China has about 30 robots for every 10,000 factory workers compared with 437 in South Korea and 152 in the United States. The global average is 62. Beijing wants China's number to rise to 100 by 2020.
The switch to robots has raised fears that it will contribute to slowing job growth, though there are few signs that's happening yet.
Deng said Rapoo hasn't had to resort to layoffs. Rather, the company is just not replacing workers who quit.
"It's not simply replacing the operation of workers by robot. We do more than that. We are making a robot platform" in which humans and machines work together to make production more flexible, he said.
On a recent tour of Rapoo's factory, Deng pointed out the efficiencies.
As a conveyor belt carried circuit boards out of an industrial soldering machine, a robot arm removed them from metal jigs and placed them on another belt. Human workers typically do this job in other factories, Deng said, but turnover is high because of the heat and repetitiveness.
In a glass-walled room, robots assembled receivers for wireless mice, tasks that were previously done by 26 people, Deng said. Now, one or two humans supervise as a laser automatically fuses shut metal USB plug housings, four at a time, while steps away, robot arms slide the plugs into plastic sleeves.
Automation means "accuracy can still remain very high and there are seldom failures for the robots," said Deng.
Boosting quality also helps China's companies achieve another national goal of shedding their reputation as shoddy, low cost producers to compete with global rivals.
Automation will allow Chinese factories to grab a bigger share of industries where accuracy and precision are crucial, such as aerospace, medical devices and optical components, said Derick Louie, of the Hong Kong Productivity Council.
Makers of toys and other low-profit consumer goods, however, "probably will have to move outside of China due to rising labor costs and environmental taxation," he said.
Full text of the article by Kelvin Chan in ABC News.
Scientists trying to build a better robot are encouraged by the steps, however tentative, of a humanoid named Atlas.
In a video shown recently by Atlas’s makers, it is hard to miss the human in the humanoid as the 6-foot-2 machine takes a casual, careful stroll through the woods. It walks like a crouched limbo contestant (who perhaps imbibed one too many piña coladas), shuffles through a wooded area, tethered by a power cord, and then breaks into a more confident, foot-slapping walk when it reaches flat ground — much as a person would. Scientists hope to make an untethered version soon.
Atlas’s ability to be outside in the woods is one step toward developing the balance and dynamics that come naturally to humans, according to Marc Raibert, the founder of Boston Dynamics, the Google-owned research firm behind the project.
“We’re making pretty good progress on making it so that it has mobility that is sort of within shooting range of yours,” Dr. Raibert said, referencing the video at a recent conference. The video of Atlas moving through the woods was made last year with the 2013 version of the machine, a representative, Maria Silvaggi, said in an email on Tuesday.
Atlas, first publicly unveiled in 2013, received funding from the Pentagon’s Defense Advanced Research Projects Agency, but that relationship has ended, a representative said Tuesday. Scientists believe that the robot could eventually assist humans after disasters, like earthquakes and fires, going where rescue workers cannot. But for the scientists, development can be maddeningly slow.
For Atlas, an aluminum machine weighing over 300 pounds, the training process looks grueling: Researchers kick the robot, throw weights at it or make it walk over rock beds to observe how well it adapts to challenges. On the rock bed, Atlas can be seen tottering but rushing to complete the course, a move, Dr. Raibert said, that mimics the behavior of people and animals — when we’re on unsure footing, our instinct is to keep moving forward, and fast.
Leaving the controlled setting of a lab presents its own hurdles.
“Out in the world is just a totally different challenge than in the lab,” Dr. Raibert said. “You can’t predict what it’s going to be like.”
Full story by Katie Rogers in The New York Times.
As petroleum has become harder to find, it's become increasingly costly and dangerous to extract. Could aerial data-collection bots create a new boom in fossil fuels?
Oil and gas exploration has always moved at the speed of the equipment—glacially. Productive job sites quickly get clogged with fleets of massive trucks, cranes, and rotary diggers, forcing site planners to observe the area by helicopter just to direct traffic.
For decades, this has been the only way to do business. And it’s pricey. Drilling machinery burns thousands of dollars per day in operation, and nearly as much when it sits idle. When conditions change—weather, markets, breakdowns—teams suffer a chain reaction of runaway costs only the biggest conglomerates can afford.
With such massive overhead acting as a barrier to entry, oil and gas companies have been slow to innovate around worker safety and environmental impact. But aerial drones threaten to drastically change the pace. Are American oil companies ready?
Self-piloting drones like the Boomerang are leading a small but fundamental change in the industry. In oil and gas, equipment doesn’t move without data—where to drill, how deep to go, and so on. With the traffic bottleneck removed, suddenly equipment can move more nimbly and exploration startups can get in the drilling game for a fraction of the traditional entry cost.
The impact of self-piloted drones comes in the form of speed and savings. By photographing job sites 24 hours a day in high definition, oil and gas principals get an up-to-the-minute view of how their resources are deployed—even when conditions are too dangerous for manned aviation. Instead of planning fleet movements weeks in advance, managers can make decisions on the fly, cutting costs and making management more responsive. Though oil and gas are becoming increasingly hard to extract in the U.S., dynamic job site monitoring is one of a handful of technologies that could keep domestic exploration competitive with overseas oil.
The Boomerang self-piloting drone works like consumer drones, but with one key feature: it requires zero maintenance. After surveying several square miles of terrain, the three-foot-wide quadcopter can pilot itself back to a docking station where it self-installs a fresh battery pack. Other industrial drones like the Spektre can even make 3D maps of dangerous sites, forgoing the need for human workers to analyze the data once the drone is done surveying.
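The zero-maintenance cycle described above can be sketched as a small state machine; the states and the battery threshold here are assumptions for illustration, not Boomerang's real firmware.

```python
# Toy survey/dock/swap state machine: the drone surveys until its battery
# runs low, returns to the dock, swaps in a fresh pack, and resumes,
# with no human maintenance step in the loop.

LOW_BATTERY = 20  # percent; illustrative threshold

def next_state(state: str, battery: int) -> str:
    """One transition of the assumed mission cycle."""
    if state == "survey" and battery <= LOW_BATTERY:
        return "return_to_dock"
    if state == "return_to_dock":
        return "swap_battery"
    if state == "swap_battery":
        return "survey"  # fresh pack installed, resume surveying
    return state

print(next_state("survey", 15))         # low battery: head home
print(next_state("swap_battery", 100))  # resume after the swap
```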
When combined with other technologies like additive manufacturing and advanced seismography, drones-as-a-platform can become a fulcrum point for much larger industry disruptions. Should a drone report broken machinery, its stereoscopic vision could dispatch an order for a 3D-printed replacement part right on site.
Fitted with advanced seismic sensors, drones could even replace exploration teams entirely, recording subterranean data at high sensitivity from hundreds of feet in the air. These capabilities entail a big shake-up for one of the world’s most entrenched industries—with less strip mining, fewer accidents, and cheaper fuel for the rest of us.
Full story in HP Matter.
Facebook just built a gigantic solar-powered drone that will stay in the stratosphere for months at a time, beaming broadband Internet to rural and hard-to-reach areas.
The drone, called Aquila, is the baby of Facebook's (FB, Tech30) year-old Connectivity Lab. The lab has been developing new technology as part of the social network's mission to "connect everybody in the world."
Four billion people don't have access to the Internet, and 10% of the world's population lacks the necessary infrastructure to get online. To reach these people, Facebook is working on drones, satellites, lasers and terrestrial Internet technology.
On Thursday, Facebook announced it had finished construction of its first full-sized drone, along with other project milestones. The team's researchers say they've found a way to use lasers to deliver data from the drones at speeds ten times faster than the industry standard.
Facebook has been working on the Aquila for a year, building off of technology it acquired when it bought UK drone company Ascenta in 2014. The solar-powered unmanned aircraft is designed to fly far above commercial airspace and weather, and to stay in the air for three months at a time. It could give Internet access to people located in a 50-mile radius on the ground.
"It's sort of like a backbone of Internet using lasers in the sky, that's the dream we have," said Yael Maguire, the engineering director of Facebook's Connectivity Lab.
Aquila hasn't taken flight yet, but the UK-based team has done flight testing on a number of scale models. Over the next six months, the group will run structural and other tests and eventually take it for its first test flight.
The technology is years away from being used in the field -- Facebook doesn't yet have an exact timeline.
The Aquila drone looks like a giant v-shaped boomerang. It's 140 feet in diameter -- about the same wingspan as a Boeing 737 -- and covered in solar cells. It is made of light carbon fiber that is two to three times stronger than steel when cured. It will weigh around 880 pounds when fully outfitted with motors, batteries and communications equipment.
It won't require a runway. The Aquila will be launched by tethering it to a helium balloon and floating it straight past the weather and commercial airspace. During the day, it will cruise in circles at 90,000 feet, soaking up solar power. At night, it will save energy by drifting down to 60,000 feet. Though current regulations require one pilot on the ground for each drone, Facebook hopes to design the Aquila so it can fly without a dedicated pilot.
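The day/night altitude strategy can be expressed as a simple rule. This is a sketch based on the figures in the article, not Facebook's flight software, and the daylight window is an assumption.

```python
# Illustrative altitude rule: cruise high to harvest solar power by day,
# glide lower at night to stretch the batteries.

DAY_ALTITUDE_FT = 90_000
NIGHT_ALTITUDE_FT = 60_000

def target_altitude_ft(hour: int) -> int:
    """Target cruise altitude for a given hour of the day (0-23)."""
    is_daytime = 6 <= hour < 18  # assumed daylight window
    return DAY_ALTITUDE_FT if is_daytime else NIGHT_ALTITUDE_FT

print(target_altitude_ft(12))  # midday
print(target_altitude_ft(2))   # middle of the night
```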
To get the Internet, a laser system will connect the ground and the drone. A Facebook team has been working on the laser technology in California, and says it has achieved speeds of tens of gigabytes per second -- that's fast enough to allow hundreds of thousands of people to access broadband Internet simultaneously.
The lab works with Facebook's Internet.org, which has been criticized for only giving people access to a limited number of Internet services. But Aquila is designed to provide full broadband Internet. Facebook also won't operate the planes itself. Instead, the company plans to work with local providers or governments to actually deploy the technology, though details are still unknown.
"Building big planes and selling them is not core to our mission of connecting people," said Jay Parikh, a VP of engineering. "We are not going to take this stuff and be 'Facebook ISP.'"
Full story in CNN Money.
Those looking to get in on the robotics game have a number of choices in where they might go to learn about robo-topics like mobility, manipulation, and artificial intelligence.
A number of top-notch universities around the country (as well as some less-than-obvious names) offer robotics education programs befitting plenty of people looking to build the next great robot.
Whether you want to build a better Roomba or a new best friend, here are ten colleges that will give you the tools you need.
The Robotics and Intelligent Machines Lab at UC Berkeley has an entire department devoted to replicating animal movement for the sake of improving robotic mobility. The school's Laboratory for Automation Science and Engineering gets into more general robotics work, designing solutions for things like robot-assisted surgery and automated manufacturing. There's even an entire Computer Vision Group so that students might learn how to help robots make sense of what they "see."
It's an incredibly robust college for robotics that will likely meet your interests no matter what they are.
Johns Hopkins University
The goal of the Johns Hopkins University's Laboratory for Computational Sensing and Robots (a not-for-profit division of the school) is straightforward: to "create knowledge and foster innovation to further the field of robotics science and engineering."
This is accomplished by exposing students to a wide variety of robotics topics. Consider its LIMBS Laboratory, which examines the principles of sensory guidance in animals and sees how they might be applied to robots. Consider its Computational Interaction and Robotics Laboratory, which examines the many hard problems encountered in human-robot interaction and robotic spatial awareness.
Check out this fact sheet on the school's robotics facilities. You can tell they're taking this stuff seriously.
Colorado School Of Mines
Mining is an incredibly complex pursuit, and robots can step in to do dangerous work to save lives. Someone needs to build them, and the Colorado School of Mines has its Center for Automation, Robotics, and Distributed Intelligence (CARDI) to equip people with the tools to do so.
Because it's a mining-centric school, curriculum runs the gamut from communication protocols to environmental considerations. CARDI students meet once a month over lunch to keep each other apprised of their research — one person will give a presentation on what they're up to, and the meetings frequently feature a guest to speak on topics relevant to the industry.
If you want to check out a cool project to come out of the school, we recommend "Intelligent Geosystems."
Stanford University
Founded in 1962, Stanford's Artificial Intelligence Laboratory has been facilitating robotics education for more than five decades. Students gather for weekly reading groups to dissect robotics papers and discuss the latest developments in the field.
Its faculty's list of interests is loaded with fun robo-buzzwords: informatics, logic, machine learning, natural language processing, and so on.
University of Southern California
USC's Robotics Research Lab encourages undergrads to get their hands dirty by taking directed research credits from faculty. Graduate students are invited to do research and build things in the university's robotics labs.
A dedicated page exists just to showcase videos of USC robotic creations in action. One of them is a robot for children that blows bubbles!
Columbia University
The projects described on the website for Columbia University's Robotics Group are impressive, to say the least. Students have built autonomous vehicles for navigating urban environments, 3-D simulation tools to teach robots how to interact with the real world, and even a system to facilitate aspects of robot-assisted surgery.
The program is headed up by Professor Peter Allen, who was named a Presidential Young Investigator by the National Science Foundation.
Washington University in St. Louis
Often referred to as the "Harvard of the Midwest," WashU offers a master of engineering in robotics. The program is designed to give students the experience they need to find professional robotics work upon graduation, and the curriculum centers on making sense of robotic components like sensors and actuators, then finding new ways to use them to solve problems.
Students enrolled in the program will go hands-on with mobile robotics, robot-human interaction, and brain-computer interfaces.
Georgia Tech
Georgia Tech's Institute for Robotics and Intelligent Machines is led by Henrik Christensen, a noted roboticist and thinker who recently speculated that children born today will never have to drive a conventional car. He is frequently cited as a source on where robotics is heading, including speculation about what Google will do with all its recent robotics acquisitions.
The program aims to give students an understanding of a diversity of robotics topics, such as mechanics, interaction, perception, and artificial intelligence and cognition.
Carnegie Mellon University
The Carnegie Mellon Robotics Institute consists of 76 faculty members, 94 Ph.D. students and 132 master’s students. The university only offers a minor in robotics or a second major in robotics — students have to have already been accepted into another undergraduate major — but despite this, CMU has turned out a number of impressive robotics thinkers and entrepreneurs.
Alumni include Chris Urmson, who heads up Google’s self-driving car program. Boris Sofman, Mark Palatucci, and Hanns Tappeiner are the founders of Anki, the company that builds artificially intelligent car racing sets. Mark Maimone pilots NASA's Curiosity on Mars!
A Carnegie Mellon team led by Professor William "Red" Whittaker won the 2007 DARPA Urban Challenge robot vehicle race, which functions as something of a robot Olympics.
Massachusetts Institute of Technology
MIT is nearly synonymous with developing cool, cutting-edge technology.
Its Computer Science and Artificial Intelligence Laboratory has spawned a number of robotic creations, and its long list of notable alumni includes folks like Colin Angle and Helen Greiner, co-founders of iRobot; Marc Raibert, founder of Boston Dynamics; and Matt Mason, who is now director of The Robotics Institute at Carnegie Mellon University.
There's something to be said for a school whose alumni make up the majority of the country's computer science professors.
Full text of this article is available at Business Insider.