Tuesday, October 30, 2012

Need a hand? Wearable robot arms give you two

IF YOU fancy an extra pair of hands, why not take a leaf out of Dr Octopus's book? A pair of intelligent arms should make almost any job a lot easier.
The semi-autonomous arms extend out in front of the body from the hips and are strapped to a backpack-like harness that holds the control circuitry. The prototype is the handiwork of Federico Parietti and Harry Asada of the Massachusetts Institute of Technology, who suggest that one of the first uses could be to help factory workers, or those with tricky DIY tasks to perform.
"It's the first time I've seen robot arms designed to augment human abilities. It's bold and out of keeping with anything I've ever seen to attach two arms to a human," says Dave Barrett, a roboticist and mechanical engineer at Olin College in Needham, Massachusetts.
So how are the arms controlled? Parietti and Asada designed the limbs to learn and hopefully anticipate what their wearer wants. The idea is that the algorithms in charge of the limbs would first be trained to perform specific tasks.
To demonstrate what the prototype can do, a camera observed a pair of workers helping each other drill into a loose metal plate. The camera measured the distances between the tools and the work surface, while feedback from sensors on the workers' bodies tracked their movements. This taught the arms where to grab and how much force to apply, so they could then assist a lone worker by both holding the drill and securing the plate.
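The training loop described above can be caricatured in a few lines: fit a simple model to demonstration data, then use it to drive the arms for a lone worker. Everything here - the data, the linear model, the function names - is invented for illustration; the MIT system uses far richer sensing and learning.

```python
# Toy version of learning-from-demonstration: fit clamping force as a
# linear function of distance to the plate, from (hypothetical) two-worker
# demonstration data, then predict the force for a new situation.

def fit_line(samples):
    """Least-squares fit of force = a * distance + b from demonstrations."""
    n = len(samples)
    sx = sum(d for d, _ in samples)
    sy = sum(f for _, f in samples)
    sxx = sum(d * d for d, _ in samples)
    sxy = sum(d * f for d, f in samples)
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - a * sx) / n
    return a, b

# Demonstrations: (distance to plate in cm, clamping force in N) -- made up.
demos = [(1.0, 9.0), (2.0, 7.0), (3.0, 5.0), (4.0, 3.0)]
a, b = fit_line(demos)

def predict_force(distance):
    return a * distance + b

print(round(predict_force(2.5), 2))  # interpolated clamping force: 6.0
```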
If you think the idea of free-roaming robotic arms holding power tools sounds alarming, you aren't alone. "If a robotic arm can do useful work, it can also hurt you badly," says Barrett. "Traditionally, people are kept far away from robot arms because the arms are dangerous. The concept of strapping robotic arms onto a person is terrifying," he says.
Parietti and Asada have tried to address some of those safety fears by building the arms from softer material. Flexible components in the robotic arm, called series elastic actuators - invented in the 1990s by Gill Pratt and Matt Williamson at MIT - mean that less damage will be done if the arms do lose control.
Dennis Hong at Virginia Tech in Blacksburg says that roboticists have spent the last 30 years attempting to make robots more springy and compliant, so they can work safely alongside humans. He says he has never come across robotic arms designed to be worn on the body.
The limbs were described at the Dynamic Systems and Control Conference in Florida last week. Funded by Boeing, their first use could be to help workers build aircraft. The broader goal, say the researchers, is for the limbs and their users to work together so seamlessly that "humans may perceive them as part of their own bodies".



Originally posted at www.newscientist.com.

Soap bubble screen is 'the world's thinnest display'

 

Viewers may soon be able to watch films on soap bubbles - after researchers developed a technology to project images on a screen made of soap film.
An international team produced a display that uses ultrasonic sound waves to alter the film's properties and create either a flat or a 3D image.
The bubble mixture is more complex than the one sold in stores for children, but soap is still the main ingredient.
The team says the display is the world's thinnest transparent screen.
"It is common knowledge that the surface of soap bubble is a micro membrane. It allows light to pass through and displays the colour on its structure," the lead researcher, Yoichi Ochiai from the University of Tokyo, wrote in his blog.
"We developed an ultra-thin and flexible BRDF [bidirectional reflectance distribution function, a four-dimensional function defining how light is reflected at an opaque surface] screen using the mixture of two colloidal liquids."
Although traditional screens are opaque, the display created by Dr Ochiai and his colleagues Keisuke Toyoshima from the University of Tsukuba in Japan and Alexis Oyama from the Carnegie Mellon University in the US, varies in transparency and reflectance.
Using sound

The team managed to control and exploit these properties by hitting the bubble's membrane with ultrasonic sound waves, played through speakers.
Sonic waves alter the texture of a projected image, making it look smooth or rough.
"Typical screens will show every image the same way, but images should have different visual properties," Dr Oyama told the BBC.
"For example, a butterfly's wings should be reflective and a billiard ball should be smooth, and our transparent screen can change the reflection in real time to show different textures."
To change the transparency of the projected image, the scientists modified the wave's frequency.
"Our membrane screen can be controlled using ultrasonic vibrations. Membrane can change its transparency and surface states depending on the scales of ultrasonic waves," wrote Dr Ochiai in his blog.
"The combination of the ultrasonic waves and ultra thin membranes makes more realistic, distinctive, and vivid imageries on screen.
"This system contributes to open up a new path for display engineering with sharp imageries, transparency, BRDF and flexibility."
If several bubble screens are put together, viewers get a 3D effect and even a holographic projection.
The bubble is much harder to burst than a regular soap bubble, as the mixture contains special colloids - and objects can even pass through the film without popping it.
The team said such a screen could be useful for artists wanting to give a realistic feel to their works, for museums - for instance, to display floating planets - and even for magicians.
Previously, there have been attempts to develop unconventional displays - a computer screen made of water and a touchscreen made of ice.


Courtesy: BBC News, http://www.bbc.co.uk

Megapixel Camera? Try Gigapixel



DURHAM, N.C. -- By synchronizing 98 tiny cameras in a single device, electrical engineers from Duke University and the University of Arizona have developed a prototype camera that can create images with unprecedented detail.
The camera's resolution is five times better than 20/20 human vision over a 120-degree horizontal field.
The new camera has the potential to capture up to 50 gigapixels of data, which is 50,000 megapixels. By comparison, most consumer cameras are capable of taking photographs with sizes ranging from 8 to 40 megapixels. Pixels are individual "dots" of data - the higher the number of pixels, the better the resolution of the image.
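The figures quoted above translate into daunting data volumes. A quick back-of-the-envelope check, assuming uncompressed 24-bit colour (3 bytes per pixel - an assumption, not a specification from the researchers):

```python
# 50 gigapixels expressed in megapixels, plus the raw storage one such
# frame would need at an assumed 3 bytes per pixel (24-bit RGB).

gigapixels = 50
megapixels = gigapixels * 1000           # 50 gigapixels = 50,000 megapixels
pixels = gigapixels * 10**9
bytes_per_pixel = 3                      # assumed 24-bit colour
raw_gb = pixels * bytes_per_pixel / 10**9

print(megapixels)   # 50000
print(raw_gb)       # 150.0 gigabytes per uncompressed frame
```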
The researchers believe that within five years, as the electronic components of the cameras become miniaturized and more efficient, the next generation of gigapixel cameras should be available to the general public. Details of the new camera were published online in the journal Nature. The team’s research was supported by the Defense Advanced Research Projects Agency (DARPA).
The camera was developed by a team led by David Brady, Michael J. Fitzpatrick Professor of Electrical Engineering at Duke's Pratt School of Engineering, along with scientists from the University of Arizona, the University of California, San Diego, and Distant Focus Corp.

“Each one of the microcameras captures information from a specific area of the field of view,” Brady said. “A computer processor essentially stitches all this information into a single highly detailed image. In many instances, the camera can capture images of things that photographers cannot see themselves but can then detect when the image is viewed later."
“The development of high-performance and low-cost microcamera optics and components has been the main challenge in our efforts to develop gigapixel cameras,” Brady said. “While novel multiscale lens designs are essential, the primary barrier to ubiquitous high-pixel imaging turns out to be lower power and more compact integrated circuits, not the optics.”
The software that combines the input from the microcameras was developed by an Arizona team led by Michael Gehm, assistant professor of electrical and computer engineering at the University of Arizona.
“Traditionally, one way of making better optics has been to add more glass elements, which increases complexity,” Gehm said. “This isn’t a problem just for imaging experts. Supercomputers face the same problem, with their ever more complicated processors, but at some point the complexity just saturates, and becomes cost-prohibitive."

“Our current approach, instead of making increasingly complex optics, is to come up with a massively parallel array of electronic elements,” Gehm said. “A shared objective lens gathers light and routes it to the microcameras that surround it, just like a network computer hands out pieces to the individual work stations. Each gets a different view and works on their little piece of the problem. We arrange for some overlap, so we don’t miss anything.”
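Gehm's "massively parallel array" idea - each microcamera sees its own patch, patches overlap so nothing is missed, and a processor merges them - can be sketched in miniature. This toy reduces the problem to one dimension with invented data; real stitching must also align and blend the 2-D images photometrically.

```python
# Toy illustration of stitching overlapping microcamera views into one
# mosaic: each tile repeats the trailing pixels of its neighbour, and the
# merge step drops the duplicated leading pixels of each subsequent tile.

def stitch(tiles, overlap):
    """Concatenate tiles, skipping the overlapping leading pixels of each
    subsequent tile."""
    mosaic = list(tiles[0])
    for tile in tiles[1:]:
        mosaic.extend(tile[overlap:])
    return mosaic

scene = list(range(10))                                    # the "true" scene
tiles = [scene[0:4], scene[2:6], scene[4:8], scene[6:10]]  # overlap of 2
print(stitch(tiles, 2) == scene)                           # True
```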
The prototype camera itself is two-and-a-half feet square and 20 inches deep. Interestingly, only about three percent of the camera consists of optical elements; the rest is the electronics and processors needed to assemble all the information gathered. This, the researchers said, is the area where additional work to miniaturize the electronics and increase their processing ability will make the camera more practical for everyday photographers.
“The camera is so large now because of the electronic control boards and the need to add components to keep it from overheating,” Brady said, “As more efficient and compact electronics are developed, the age of hand-held gigapixel photography should follow.”
Co-authors of the Nature report with Brady and Gehm include Steve Feller, Daniel Marks, and David Kittle from Duke; Dathon Golish and Esteban Vera from Arizona; and Ron Stack from Distant Focus.


Originally posted by Duke University.

New NASA Satellites Have Android Smartphones for Brains



   NASA is aiming to launch a line of small satellites called “PhoneSats” that are cheaper to make and easier to build than those it has produced in the past. To achieve this, engineers are using unmodified Android smartphones — in one prototype, HTC’s Nexus One, and in another, Samsung’s Nexus S — to perform many of a satellite’s key functions.
As NASA explains on its website, these off-the-shelf smartphones “offer a wealth of capabilities needed for satellite systems, including fast processors, versatile operating systems, multiple miniature sensors, high-resolution cameras, GPS receivers and several radios.”
“This approach allows engineers to see what capabilities commercial technologies can provide, rather than trying to custom-design technology solutions to meet set requirements,” NASA adds.
Building one of these prototype satellites costs a mere $3,500. Three are expected to launch aboard the first flight of Orbital Sciences Corporation's Antares rocket from NASA's flight facility at Wallops Island, Va., later this year.


Originally posted at Mashable.

NASA's Nanosail-D 'Sails' Home -- Mission Complete

After spending more than 240 days "sailing" around the Earth, NASA's NanoSail-D -- a nanosatellite that deployed NASA's first-ever solar sail in low-Earth orbit -- has successfully completed its Earth orbiting mission.

Launched Nov. 19, 2010, as a payload on NASA's FASTSAT, a small satellite, NanoSail-D deployed its sail on Jan. 20.

The flight phase of the mission successfully demonstrated a deorbit capability that could potentially be used to bring down decommissioned satellites and space debris by re-entering and totally burning up in the Earth's atmosphere. The team continues to analyze the orbital data to determine how future satellites can use this new technology.

A main objective of the NanoSail-D mission was to demonstrate and test the deorbiting capabilities of a large, low-mass, high-surface-area sail.

"The NanoSail-D mission produced a wealth of data that will be useful in understanding how these types of passive deorbit devices react to the upper atmosphere," said Joe Casas, FASTSAT project scientist at NASA's Marshall Space Flight Center in Huntsville, Ala.

"The data collected from the mission is being evaluated," said Casas, "in conjunction with data from FASTSAT science experiments intended to study and better understand the drag influences of Earth's upper atmosphere on satellite orbital re-entry."

The FASTSAT science experiments are led by NASA's Goddard Space Flight Center in Greenbelt, Md. and sponsored by the Department of Defense Space Experiments Review Board which is supported by the Department of Defense Space Test Program.

Initial assessment indicates NanoSail-D exhibited the cyclical deorbit rate behavior that researchers had previously only theorized.

"The final rate of descent depended on the nature of solar activity, the density of the atmosphere surrounding NanoSail-D and the angle of the sail to the orbital track," said Dean Alhorn, principal investigator for NanoSail-D at Marshall Space Flight Center. "It is astounding to see how the satellite reacted to the sun's solar pressure. The recent solar flares increased the drag and brought the nanosatellite back home quickly."

NanoSail-D orbited the Earth for 240 days, performing well beyond expectations, and burned up during re-entry into Earth's atmosphere on Sept. 17.

NASA formed a partnership with Spaceweather.com to engage the amateur astronomy community to submit images of the orbiting NanoSail-D solar sail during the flight phase of the mission. NanoSail-D was a very elusive target to spot in the night sky -- at times very bright and other times difficult to see at all. Many ground observations were made over the course of the mission. The imaging challenge concluded with NanoSail-D's deorbit. Winners will be announced in early 2012.

For more information, visit:


http://www.nanosail.org/


The NanoSail-D experiment was managed at the Marshall Center, and designed and built by engineers in Huntsville. Additional design, testing, integration and execution of key spacecraft bus development and deployment support operation activities were conducted by engineers at NASA's Ames Research Center in Moffett Field, Calif. The experiment is the result of a collaborative partnership between NASA, the Department of Defense Space Test Program, the U.S. Army Space and Missile Defense Command, the Von Braun Center for Science and Innovation, Dynetics Inc. and Mantech Nexolve Corp.

For more information about NanoSail-D visit:

http://www.nasa.gov/mission_pages/smallsats/nanosaild.html
 
 
Janet L. Anderson, 256-544-0034
Marshall Space Flight Center, Huntsville, Ala.
janet.l.anderson@nasa.gov

Source: www.nasa.gov

Google reveals new Nexus devices

 

Two new Android devices, the Nexus 4 handset and the Nexus 10 tablet, will go on sale on 13 November, Google has announced.
The handset, made by manufacturer LG, can also work as a games controller when wirelessly connected to a TV.
Google said that the screen of the new Nexus 10 tablet by Samsung has "the world's highest resolution display".
UK prices start at £239 for the 8GB Nexus 4, while the 16GB Nexus 10 costs £319.
The Nexus 4 smartphone has a new panoramic camera tool called Photo Sphere which Google claims is "unlike any panorama you have ever seen".
"Snap shots up, down and in every direction to create stunning 360-degree immersive experiences," says the firm on its blog.

Analysis

The battle lines are now drawn for the all-important Christmas shopping season.
Apple's iPad may have dominated tablet sales to this point, but it now faces cheaper competitors with their own media ecosystems and - in the case of Samsung's new Nexus 10 - a higher-resolution screen.
Purchase decisions may come down to brand loyalty: does a shopper identify with an Amazon, Apple, Google or Microsoft logo on the back of their device? This is still a very young market and the key players are essentially still in land grab mode.
The one thing for certain is that those involved feel under pressure to innovate more quickly and lower their margins - or in some cases sell the hardware at break-even prices - all of which can only be good for the public.
In addition users will have access to "Google Now" which flags up flight alerts, hotel recommendations and package tracking based on the user's location and the contents of their previous email messages.
Nexus 10 will be able to manage multiple profiles, meaning that the tablet can be shared between more than one person.
Google says that there are more than 675,000 apps that will be available via its Google Play store. Apple claims that there are over 275,000 apps available for the latest version of iPad.
Nexus 10 has a 10in (25cm) screen with a resolution of 300ppi (pixels per inch). Its advertised battery life is 500 hours on standby or nine hours of video play.
The firm also announced that it will add music to its Google Play media store in the UK, Germany, France, Italy and Spain on the same day.
New users will be able to upload 20,000 of their existing music tracks to their online accounts for free, and then stream music via the Cloud to any Android device connected to the internet.
Google also announced a refresh of its existing tablet, the Nexus 7, which will now offer 3G connectivity on a top-end model. However, all versions of the Nexus 10 are wi-fi only.
None of the new hardware will be made by Motorola.
Google purchased Motorola Mobility, the mobile phone manufacturing arm of the firm, in 2011 for $12.5bn (£7.7bn) but a spokesperson said that the company was not receiving any "special treatment".
The announcement was made after Google scrapped plans for a high-profile event to mark the launches in New York.
The event was cancelled because of the approach of Hurricane Sandy.

Source: http://www.bbc.com

Drones set to share sky with domestic air traffic

 

Tests have been carried out to see whether military drones can mix safely in the air with passenger planes.
The tests involved a Predator B drone fitted with radio location systems found on domestic aircraft that help them spot and avoid other planes.
The tests will help to pave the way for greater use of drones in America's domestic airspace.
The flight tests took place off the coast of Florida in early August, but details have only just been released.
The Predator B used in the tests is a modified version of the Guardian drone typically used by the US navy. While such robot planes have been widely used in war zones and on military operations, their use over US soil has been restricted.
Politicians have given the Federal Aviation Administration (FAA) until 2015 to prepare its air traffic systems for the use of drones, both commercial and military, over US territory.
For the tests, the drone was fitted with a location system known as Automatic Dependent Surveillance-Broadcast (ADS-B), which the FAA wants all domestic aircraft to use by 2020.
Once widely adopted, ADS-B will shift America's air traffic control from a ground-based system to one that takes flight position data from satellites. The FAA hopes the switch will simplify the job of managing air traffic and improve safety.
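The essence of ADS-B is that each aircraft periodically broadcasts its satellite-derived position and identity, and any receiver in range can track it. A heavily simplified sketch of such a report follows; the field names and text encoding are invented for illustration - real ADS-B uses compact binary Mode S extended squitter messages, not strings.

```python
# Simplified model of an ADS-B-style position broadcast: the aircraft
# encodes its identity and GPS-derived position; a ground station (or
# another aircraft) decodes it. Illustrative only.

from dataclasses import dataclass

@dataclass
class AdsbReport:
    icao: str        # 24-bit aircraft address, as hex text
    lat: float       # degrees, from GPS
    lon: float
    alt_ft: int

    def encode(self):
        return f"{self.icao},{self.lat:.4f},{self.lon:.4f},{self.alt_ft}"

    @staticmethod
    def decode(msg):
        icao, lat, lon, alt = msg.split(",")
        return AdsbReport(icao, float(lat), float(lon), int(alt))

report = AdsbReport("A1B2C3", 28.5383, -81.3792, 18000)
assert AdsbReport.decode(report.encode()) == report  # round-trips cleanly
```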
The drone completed its trials successfully, drone maker General Atomics said in a statement. Its location and flight path were precisely monitored throughout the flight, the defence firm said, suggesting that such craft can "fly cooperatively and safely" in domestic US airspace.
More tests are planned.

Source: BBC News

Yes, Driverless Cars Know the Way to San Jose

THE “look Ma, no hands” moment came at about 60 miles an hour on Highway 101.
Brian Torcellini, Google’s driving program manager, had driven the vehicle out of the parking lot at one of the company’s research buildings and along local streets to the freeway, a main artery through Silicon Valley. But shortly after clearing the on-ramp and accelerating to the pace of traffic, he pushed a yellow button on the modified console between the front seats. A loud electronic chime came from the car’s speakers, followed by a synthesized female voice.
“Autodriving,” it announced breathlessly.
Mr. Torcellini took his hands off the steering wheel, lifted his foot from the accelerator, and the Lexus hybrid drove itself, following the curves of the freeway, speeding up to get out of another car’s blind spot, moving over slightly to stay well clear of a truck in the next lane, slowing when a car cut in front.
“We adjusted our speed to give him a little room,” said Anthony Levandowski, one of the lead engineers for Google’s self-driving-car project, who was monitoring the system on a laptop from the passenger seat. “Just like a person would.”
Since the project was first widely publicized more than two years ago, Google has been seen as being at the forefront of efforts to free humans from situations when driving is drudgery. In all, the company’s driverless cars — earlier-generation Toyota Priuses and the newer Lexuses, recognizable by their spinning, roof-mounted laser range finders — have logged about 300,000 miles on all kinds of roads. (Mr. Torcellini unofficially leads the pack, with roughly 30,000 miles behind the wheel — but not turning it.)
But the company is far from alone in its quest for a car that will drive just like a person would, or actually better. Most major automobile manufacturers are working on self-driving systems in one form or another.
Google says it does not want to make cars, but instead to work with suppliers and automakers to bring its technology to the marketplace. The company sees the project as an outgrowth of its core work in software and data management, and talks about reimagining people's relationship with their automobiles.
Self-driving cars, Mr. Levandowski said, will give people “the ability to move through space without necessarily wasting your time.”
Driving cars, he added, “is the most important thing that computers are going to do in the next 10 years.”
For the automakers, on the other hand, self-driving is more about evolution than revolution — about building incrementally upon existing features like smart cruise control and parking assist to make cars that are safer and easier to drive, although the driver is still in control. Full autonomy may be the eventual goal, but the first aim is to make cars more desirable to customers.
“We have this technology,” said Marcial Hernandez, principal engineer at the Volkswagen Group’s Electronics Research Laboratory, up the road in Belmont, Calif. “How do we turn it into a product that can be advertised to a customer, that will have some benefit to a customer?”
With all the research efforts, there is a growing consensus among transportation experts that self-driving cars are coming, sooner than later, and that the potential benefits — in crashes, deaths and injuries avoided, and in roads used more efficiently, to name a few — are enormous. Already, Florida, Nevada and California have made self-driving cars legal for testing purposes, giving each car, in effect, its own driver’s license.
Richard Wallace, director for transportation systems analysis at the Center for Automotive Research, a nonprofit group that recently released a report on self-driving cars with the consulting firm KPMG, said that probably by the end of the decade, “we would be able to have a safe, hands-free left-lane commute.” In 15 to 20 years, he said, “literally from the driveway to destination starts to become possible.”



LIDAR

Google’s autonomous vehicle project uses a spinning range-finding unit, called lidar, on top of the car. It has 64 lasers and receivers.

The device creates a detailed map of the car’s surroundings as it moves. Software adds information from other sensors and compares the map with existing maps, alerting the system to any differences.
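The map-comparison step described above can be illustrated with a toy: rasterise the current lidar sweep into an occupancy grid, diff it against the stored map, and flag any cell that changed (a new obstacle, a parked car gone). The grids and coordinates here are invented; Google's system works with dense 3-D point clouds, not tiny binary grids.

```python
# Toy lidar map differencing: compare a stored occupancy grid with the
# current sweep and report the cells whose occupancy changed.

def diff_maps(stored, current):
    """Return the set of (row, col) cells whose occupancy differs."""
    return {
        (r, c)
        for r, row in enumerate(stored)
        for c, occupied in enumerate(row)
        if current[r][c] != occupied
    }

stored  = [[0, 1, 0],
           [0, 0, 0],
           [1, 0, 0]]
current = [[0, 1, 0],
           [0, 1, 0],   # new obstacle has appeared mid-grid
           [1, 0, 0]]

print(diff_maps(stored, current))  # {(1, 1)}
```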




The Eyes Have It: Control of Your Tablet



A group of Danish programmers has come up with the ultimate hands-free set: tracking eye movements to interact with tablets and smart phones.
Eye tracking has already been proposed as a way to tailor advertisements by tracking how long a viewer lingers on a given part of the screen. That technology is still in the future. But it isn't hard to envision using eye tracking to move a cursor, and essentially take the place of the finger-swipe.




That's what The Eye Tribe (formerly known as Senseye) did. With some $800,000 in seed money the team came up with software that works with an infrared LED and the phone or tablet's front-facing camera. The LED lights up the eye, and the camera picks up an image that is interpreted by the software to show where the user is looking.
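The detection step described above - the LED lights the eye and software locates the reflection in the camera frame - can be sketched crudely: find the centroid of the bright pixels. The "frame" below is an invented grid of brightness values; a real tracker models the pupil and corneal glint geometrically and maps them to screen coordinates through calibration.

```python
# Toy gaze-spot detection: locate the centroid of pixels brighter than a
# threshold in a grayscale frame, standing in for the infrared reflection.

def bright_centroid(image, threshold):
    """Centroid (row, col) of all pixels at or above the threshold."""
    pts = [
        (r, c)
        for r, row in enumerate(image)
        for c, v in enumerate(row)
        if v >= threshold
    ]
    n = len(pts)
    return (sum(r for r, _ in pts) / n, sum(c for _, c in pts) / n)

frame = [[10,  10,  10, 10],
         [10, 250, 240, 10],
         [10, 245, 255, 10],
         [10,  10,  10, 10]]

print(bright_centroid(frame, 200))  # (1.5, 1.5)
```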
The Eye Tribe says on its website that it plans to have a working version out in 2013. One thing smartphones and tablets will need is the infrared LEDs; right now they must be tacked on for The Eye Tribe's system to work. That said, a new generation of devices could come with them pre-installed.





Windows Phone 8 Late to Ball but Dressed to Kill

Windows Phone 8 is Microsoft's effort to catch up with the early movers in mobile phone technology, and it's aiming to grab attention by redefining the mobile experience. "Microsoft needs a bold thrust to have a chance of competing," said Yankee Group Vice President Carl Howe.

Microsoft on Monday officially launched Windows Phone 8 at an event in San Francisco, hot on the heels of its Windows 8 launch last week.

Windows Phone 8 "is not just having a lot of apps to choose from," Microsoft Vice President Joe Belfiore said during his presentation. He dismissed the "static grid of icons" introduced by Apple, which has become the standard, saying people are the focus of Windows Phone 8's design.
Belfiore showed off a slew of features introduced in the new mobile operating system, adding that with Microsoft's SkyDrive cloud service, those apps can run across Windows 8 PCs, Windows Phone 8 smartphones and the Xbox.





Focusing on People

Microsoft has borrowed the Google+ Circles idea and renamed it "Rooms," offering it in Windows Phone 8's People Hub.
Rooms users can create sets of people with whom they can communicate privately, and users can set up their own Rooms as required. Users can see what people in a Room are up to on their social networks, and can share notes and calendars among people in a Room.
Users can invite iPhone and Windows Phone 7 users to a Room. They can get part of the Room's experience such as the calendar, but not the full experience, Belfiore said.
Android users, however, won't be included. "Due to the inconsistent calendar experience found on Android phones, use of the calendar on Android is not recommended," Microsoft spokesperson Katie Hamachek told TechNewsWorld.

The Appification of Windows Phone 8

Apps are the lifeblood of smart devices, and 46 of the top 50 smartphone apps will be available on Windows Phone 8 devices, Belfiore announced. The Windows Phone 8 app store has about 120,000 apps, and hundreds more are being added every day.
New titles coming to Windows Phone 8 include "Temple Run," which Belfiore described as "a highly popular game on the iPhone and Android," "Angry Birds," "Fairway Solitaire," "Star Wars," and the voice-activated "UrbanSpoon."

Courtesy: http://www.technewsworld.com/
