Archive for May, 2011

May 30, 2011

>Nokia Promises Updates for Symbian till 2016


Nokia’s move of switching over to Windows Phone 7 as its main platform caused quite a stir, and people began questioning the future of support for Symbian devices. But there is good news for Symbian users: Stephen Elop, Nokia’s CEO, stated in a video interview today that Nokia would continue supporting and updating Symbian until at least 2016. He said that although Nokia is in a transition period, moving from Symbian to Windows Phone, support for Symbian will continue.
May 30, 2011

>Chameleon Magnets: Ability to Switch Magnets ‘On’ or ‘Off’ Could Revolutionize Computing


What causes a magnet to be a magnet, and how can we control a magnet’s behavior? These are the questions that University at Buffalo researcher Igor Zutic, a theoretical physicist, has been exploring over many years.

In a recent commentary in Science, Zutic and fellow UB physicist John Cerne, who studies magnetism experimentally, discuss an exciting advancement: A study by Japanese scientists showing that it is possible to turn a material’s magnetism on and off at room temperature.

A material’s magnetism is determined by a property all electrons possess: something called “spin.” Electrons can have an “up” or “down” spin, and a material is magnetic when most of its electrons possess the same spin. Individual spins are akin to tiny bar magnets, which have north and south poles.

In the Japanese study, which also appears in the current issue of Science, a team led by researchers at Tohoku University added cobalt to titanium dioxide, a nonmagnetic semiconductor, to create a new material that, like a chameleon, can transform from a paramagnet (a nonmagnetic material) to a ferromagnet (a magnetic material) at room temperature.

To achieve this change, the researchers applied an electric voltage to the material, exposing the material to extra electrons. As Zutic and Cerne explain in their commentary, these additional electrons — called “carriers” — are mobile and convey information between fixed cobalt ions, which causes the spins of the cobalt electrons to align in one direction.

In an interview, Zutic calls the ability to switch a magnet “on” or “off” revolutionary. He explains the promise of magnet- or spin-based computing technology — called “spintronics” — by contrasting it with conventional electronics.

Modern, electronic gadgets record and read data as a blueprint of ones and zeros that are represented, in circuits, by the presence or absence of electrons. Processing information requires moving electrons, which consumes energy and produces heat.

Spintronic gadgets, in contrast, store and process data by exploiting electrons’ “up” and “down” spins, which can stand for the ones and zeros devices read. Future energy-saving improvements in data processing could include devices that process information by “flipping” spin instead of shuttling electrons around.

In their Science commentary, Zutic and Cerne write that chameleon magnets could “help us make more versatile transistors and bring us closer to the seamless integration of memory and logic by providing smart hardware that can be dynamically reprogrammed for optimal performance of a specific task.”

“Large applied magnetic fields can enforce the spin alignment in semiconductor transistors,” they write. “With chameleon magnets, such alignment would be tunable and would require no magnetic field and could revolutionize the role ferromagnets play in technology.”

In an interview, Zutic says that applying an electric voltage to a semiconductor injected with cobalt or other magnetic impurities may be just one way of creating a chameleon magnet.

Applying heat or light to such a material could have a similar effect, freeing electrons that can then convey information about spin alignment between ions, he says.

The so-far elusive heat-based chameleon magnets were first proposed by Zutic in 2002. With his colleagues Andre Petukhov of the South Dakota School of Mines and Technology and Steven Erwin of the Naval Research Laboratory, he elucidated the behavior of such magnets in a 2007 paper.

The concept of nonmagnetic materials becoming magnetic as they heat up is counterintuitive, Zutic says. Scientists had long assumed that orderly, magnetic materials would lose their neat, spin alignments when heated — just as orderly, crystalline ice melts into disorderly water as temperatures rise.

The carrier electrons, however, are the key. Because heating a material introduces additional carriers that can cause nearby electrons to adopt aligned spins, heating chameleon materials — up to a certain temperature — should actually cause them to become magnetic, Zutic explains. His research on magnetism is funded by the Department of Energy, Office of Naval Research, Air Force Office of Scientific Research and the National Science Foundation.


May 27, 2011

>Computer Scientists Work Toward Improving Robots’ Ability to Plan and Perform Complex Actions, Domestically and Elsewhere


A household robot that handles everyday chores may have been a domestic dream a half-century ago, when the fields of robotics and artificial intelligence first captured the public imagination. However, it quickly became clear that even “simple” human actions are extremely difficult to replicate in robots. Now, MIT computer scientists are tackling the problem with a hierarchical, progressive algorithm that has the potential to greatly reduce the computational cost associated with performing complex actions.
Leslie Kaelbling, the Panasonic Professor of Computer Science and Engineering, and Tomás Lozano-Pérez, the School of Engineering Professor of Teaching Excellence and co-director of MIT’s Center for Robotics, outline their approach in a paper titled “Hierarchical Task and Motion Planning in the Now,” which they presented at the IEEE Conference on Robotics and Automation earlier this month in Shanghai.
Traditionally, programs that get robots to function autonomously have been split into two types: task planning and geometric motion planning. A task planner can decide that it needs to traverse the living room, but be unable to figure out a path around furniture and other obstacles. A geometric planner can figure out how to get to the phone, but not actually decide that a phone call needs to be made.
Of course, any robot that’s going to be useful around the house must have a way to integrate these two types of planning. Kaelbling and Lozano-Pérez believe that the key is to break the computationally burdensome larger goal into smaller steps, then make a detailed plan for only the first few, leaving the exact mechanisms of subsequent steps for later. “We’re introducing a hierarchy and being aggressive about breaking things up into manageable chunks,” Lozano-Pérez says. Though the idea of a hierarchy is not new, the researchers are applying an incremental breakdown to create a timeline for their “in the now” approach, in which robots follow the age-old wisdom of “one step at a time.”
The result is robots that are able to respond to environments that change over time due to external factors as well as their own actions. These robots “do the execution interleaved with the planning,” Kaelbling says.
The trick is figuring out exactly which decisions need to be made in advance, and which can — and should — be put off until later.
Sometimes, procrastination is a good thing
Kaelbling compares this approach to the intuitive strategies humans use for complex activities. She cites flying from Boston to San Francisco as an example: You need an in-depth plan for arriving at Logan Airport on time, and perhaps you have some idea of how you will check in and board the plane. But you don’t bother to plan your path through the terminal once you arrive in San Francisco, because you probably don’t have advance knowledge of what the terminal looks like — and even if you did, the locations of obstacles such as people or baggage are bound to change in the meantime. Therefore, it would be better — necessary, even — to wait for more information.
Why shouldn’t robots use the same strategy? Until now, most robotics researchers have focused on constructing complete plans, with every step from start to finish detailed in advance before execution begins. This is a way to maximize optimality — accomplishing the goal in the fewest movements — and to ensure that a plan is actually achievable before initiating it.
But the researchers say that while this approach may work well in theory and in simulations, once it comes time to run the program in a robot, the computational burden and real-world variability make it impractical to consider the details of every step from the get-go. “You have to introduce an approximation to get some tractability. You have to say, ‘Whichever way this works out, I’m going to be able to deal with it,'” Lozano-Pérez says.
Their approach extends not just to task planning, but also to geometric planning: Think of the computational cost associated with building a precise map of every object in a cluttered kitchen. In Kaelbling and Lozano-Pérez’s “in the now” approach, the robot could construct a rough map of the area where it will start — say, the countertop as a place for assembling ingredients. Later on in the plan — if it becomes clear that the robot will need a detailed map of the fridge’s middle shelf, to be able to reach for a jar of pickles, for example — it will refine its model as necessary, using valuable computation power to model only those areas crucial to the task at hand.
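The “plan the first steps in detail, leave the rest abstract” idea can be sketched in a few lines. The sketch below is a hypothetical illustration of interleaved planning and execution, not the actual MIT planner; the task names and helper functions (`decompose`, `primitive`, `act`) are made up for the example:

```python
# A minimal sketch of "planning in the now": refine only the task the
# robot is about to perform, execute it, and leave later tasks abstract
# until execution reaches them and the world state is current.

def hierarchical_execute(goal, world, decompose, primitive, act):
    """Interleave refinement and execution, one step at a time."""
    pending = [goal]          # abstract plan: first item is "now"
    trace = []                # primitive actions actually executed
    while pending:
        task = pending.pop(0)
        if primitive(task):
            act(task, world)  # execute now; the world may change
            trace.append(task)
        else:
            # Refine just this task into subtasks; everything after it
            # stays abstract, so no effort is wasted on stale detail.
            pending = decompose(task, world) + pending
    return trace

# Toy domain: cross the living room to answer the phone.
world = {"at": "sofa"}

def decompose(task, world):
    if task == "answer-phone":
        return ["cross-room", "pick-up-phone"]
    return []

def primitive(task):
    return task in ("cross-room", "pick-up-phone")

def act(task, world):
    if task == "cross-room":
        world["at"] = "phone"

trace = hierarchical_execute("answer-phone", world, decompose, primitive, act)
print(trace)  # ['cross-room', 'pick-up-phone']
```

Because `decompose` is called with the *current* world state, a refinement made later automatically accounts for anything earlier actions changed — the property Kaelbling describes as execution interleaved with planning.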
Finding the ‘sweet spot’
Kaelbling and Lozano-Pérez’s method differs from the traditional start-to-finish approach in that it has the potential to introduce suboptimalities in behavior. For example, a robot may pick up object ‘A’ to move it to a location ‘L,’ only to arrive at L and realize another object, ‘B,’ is already there. The robot will then have to drop A and move B before re-grasping A and placing it in L. Perhaps, if the robot had been able to “think ahead” far enough to check L for obstacles before picking up A, a few extra movements could have been avoided.
But, ultimately, the robot still gets the job done. And the researchers believe sacrificing some degree of behavior optimality is worth it to be able to break an extremely complex problem into doable steps. “In computer science, the trade-offs are everything,” Kaelbling says. “What we try to find is some kind of ‘sweet spot’ … where we’re trading efficiency of the actions in the world for computational efficiency.”
Citing the field’s traditional emphasis on optimal behavior, Lozano-Pérez adds, “We’re very consciously saying, ‘No, if you insist on optimality then it’s never going to be practical for real machines.'”
Stephen LaValle, a professor of computer science at the University of Illinois at Urbana-Champaign who was not affiliated with the work, says the approach is an attractive one. “Often in robotics, we have a tendency to be very analytical and engineering-oriented — to want to specify every detail in advance and make sure everything is going to work out and be accounted for,” he says. “[The researchers] take a more optimistic approach that we can figure out certain details later on in the pipeline,” and in doing so, reap a “benefit of efficiency of computational load.”
Looking to the future, the researchers plan to build in learning algorithms so robots will be better able to judge which steps are OK to put off, and which ones should be dealt with earlier in the process. To demonstrate this, Kaelbling returns to the travel example: “If you’re going to rent a car in San Francisco, maybe that’s something you do need to plan in advance,” she says, because putting it off might present a problem down the road — for instance, if you arrive to find the agencies have run out of rental cars.
Although “household helper” robots are an obvious — and useful — application for this kind of algorithm, the researchers say their approach could work in a number of situations, including supply depots, military operations and surveillance activities.
“So it’s not strictly about getting a robot to do stuff in your kitchen,” Kaelbling says. “Although that’s the example we like to think about — because everybody would be able to appreciate that.”
Courtesy ScienceDaily
May 17, 2011

>Android has a gaping network security hole


A trio of German security researchers from the University of Ulm have looked into the question of whether “it was possible to launch an impersonation attack against Google services and started our own analysis. The short answer is: Yes, it is possible, and it is quite easy to do so. Further, the attack is not limited to Google Calendar and Contacts, but is theoretically feasible with all Google services using the ClientLogin authentication protocol for access to its data APIs (application programming interface).” In other words: We are so hosed.

The problem is in the way that applications which deal with Google services request authentication tokens. These tokens are sometimes not even encrypted themselves and remain valid, in some cases, for up to two weeks. All a hacker has to do is grab one off an open Wi-Fi connection, and they have the “key” to someone’s Gmail account, their Google Calendar, or what have you.

The hole is not just in a few apps, either. The researchers report that “this vulnerability is not limited to standard Android apps but pertains to any Android apps and also desktop applications that make use of Google services via the ClientLogin protocol over HTTP rather than HTTPS.”

Grabbing this information off the air is trivial. While it’s not as easy as using Firesheep to hijack a Web session, anyone with a lick of hacking talent and a network protocol analyzer such as Wireshark can grab your tokens. With those in hand, they can then change your Google passwords or do anything else they want with your various Google accounts.

Google, the Android smartphone and tablet makers, and the telecoms must fix this. Now.

While Android 3.x and Android 2.3.4 require the Google Calendar and Contacts apps to use the more secure HTTPS for their connections, your device is very unlikely to be running either version yet. The vendors must push out these updates sooner rather than later. In addition, Google needs to require that all ClientLogin requests be made over secure connections, and developers should switch from ClientLogin to OAuth or some other more secure user-authentication mechanism.
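To see why plaintext HTTP is so dangerous here, consider how little work a passive eavesdropper has to do once a request is captured. The sketch below parses a sniffed HTTP request and pulls out the ClientLogin token from its `Authorization` header; the request text is a fabricated example, not a real capture, and the token value is invented:

```python
# Illustration of how trivially a ClientLogin token can be lifted from
# a plaintext HTTP request captured off an open Wi-Fi network.
import re

def extract_clientlogin_token(raw_http_request):
    """Return the token from a 'GoogleLogin auth=...' header, or None."""
    match = re.search(r"Authorization:\s*GoogleLogin auth=(\S+)",
                      raw_http_request)
    return match.group(1) if match else None

# Hypothetical sniffed request (token value is made up).
sniffed = (
    "GET /calendar/feeds/default/private/full HTTP/1.1\r\n"
    "Host: www.google.com\r\n"
    "Authorization: GoogleLogin auth=DQAAAMYAAADxEXAMPLETOKEN\r\n"
    "\r\n"
)
print(extract_clientlogin_token(sniffed))  # the token, in the clear
```

Over HTTPS the entire request, header included, is encrypted in transit, which is why forcing secure connections closes this particular hole.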

What can you do as an Android user? Well, as you wait for your vendor to update your device to Android 2.3.4, you can make a habit of not using any open Wi-Fi network.

That’s often easier said than done. In that case, I recommend that you either use your corporate VPN or look into setting up a Virtual Private Network (VPN) of your own. This used to be something only a network administrator would attempt, but lately it has become much easier to set up a small-business, or even home, VPN server.

Fortunately, you shouldn’t need to add any software to your Android device to get it to work with your VPN. Android comes with its own built-in VPN software. This software supports most of the common VPN protocols. You’ll find it on your Android device under Wireless and Network settings/VPN Settings/Add VPN.

There are also VPN Android programs, such as 1 VPN and NeoRouter for Android, but you should try using Android’s built-in VPN setup mechanisms first. If that proves a little too difficult for you, then try one of these programs.

The real answer, of course, needs to come from Google, the hardware vendors, and the telecoms. Google’s Android developers need to improve security in their latest operating systems and patch the older versions of Android to handle the tokens securely. In turn, the vendors and telecoms need to ship the latest versions of Android, with security patches, to users as soon as possible. Until they do, it’s only a matter of time before users start losing important information through this hole to data thieves.

Courtesy Zdnet

May 17, 2011

>Hide Files Within Files for Better Data Security: Using Executable Program Files to Hide Data With Steganography


A new approach to hiding data within executable computer program files could make it almost impossible to detect hidden documents, according to a report in the International Journal of Internet Technology and Secured Transactions.
Steganography is a form of security through obscurity in which information is hidden within an unusual medium. An artist might paint a coded message into a portrait, for instance, or an author might embed words in a text. A traditional paper watermark is a well-known example of steganography in action. At first glance, there would appear to be nothing unusual about the work, but a recipient aware of the presence of the hidden message would be able to extract it easily. In the computer age, steganography has become more of a science than an art.

Those intent on hiding information from prying eyes can embed data in the many different file types that are ostensibly music files (mp3), images (jpeg), video (mpeg4) or word-processing documents. Unfortunately, there is a limit to how much hidden data can be embedded in such files before it becomes apparent that something is hidden, because the file size grows beyond what one would expect for a common music or video file. A five-minute music file in mp3 format at the common bit rate of 128 kilobits per second, for instance, is expected to be about 5 megabytes in size. Were it much bigger, suspicions would be aroused as to the true nature of the file, and examination with widely available mp3-tagging software would reveal something amiss with its contents. The same could be said for almost all other file types.
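The 5-megabyte figure is easy to verify with back-of-the-envelope arithmetic (bit rate × duration, converted from bits to bytes):

```python
# Sanity check of the mp3 size estimate: 128 kbps for five minutes.
bitrate_bps = 128 * 1000          # 128 kilobits per second
duration_s = 5 * 60               # five-minute track
size_bytes = bitrate_bps * duration_s / 8   # 8 bits per byte

print(size_bytes / 1e6)           # 4.8 -> roughly 5 megabytes
```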

However, one group of files varies enormously in size and is usually rather difficult to examine in detail because it consists of compiled computer code: executable, or exe, files. These files tend to contain plenty of what might be described as “junk data” of their own — internal programmer notes and identifiers, redundant sections of code and, frustratingly in some senses, coding “bloat.” All of this adds up to large and essentially unpredictable file sizes for exe files. As such, it might be possible to embed and hide large amounts of data, in encoded form, in an exe file without disrupting the file’s ability to be executed, or run, as a program — and, crucially, without anyone discovering that the exe file has a dual function.

Computer scientists Rajesh Kumar Tiwari of the GLNA Institute of Technology, in Mathura and G. Sahoo of the Birla Institute of Technology, in Mesra, Ranchi, India, have developed just such an algorithm for embedding hidden data in an executable file. They provide details in the International Journal of Internet Technology and Secured Transactions. The algorithm has been built into a program with graphical user interface that would take a conventional exe file and the data to be hidden as input and merge the two producing a viable exe file with a hidden payload. The technology could be used on smart phones, tablet PCs, portable media players and any other information device on which a user might wish to hide data.
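The paper does not disclose the algorithm's details here, but the general idea can be illustrated with a drastically simplified sketch: appending an encoded payload after the end of an executable image. This is *not* Tiwari and Sahoo's method — real PE loaders simply ignore bytes appended after the declared sections (so-called overlay data), so the program still runs, and the sentinel marker below is invented for the example:

```python
# Toy overlay steganography: hide a payload after an executable image.
# Simplified illustration only; a real scheme would also encrypt the
# payload and avoid an obvious sentinel.
MARKER = b"--HIDDEN-PAYLOAD--"

def embed(exe_bytes, payload):
    """Append a payload, preceded by a sentinel, to the executable."""
    return exe_bytes + MARKER + payload

def extract(stego_bytes):
    """Recover the payload, or None if no sentinel is present."""
    _, sep, payload = stego_bytes.partition(MARKER)
    return payload if sep else None

exe = b"MZ\x90\x00" + b"\x00" * 64       # stand-in for a real .exe
stego = embed(exe, b"secret document")

print(stego.startswith(b"MZ"))           # True: still looks like an exe
print(extract(stego))                    # b'secret document'
```

A scheme like the one in the paper would go further, weaving the data into the “junk” regions of the file so that even the overall structure gives nothing away.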

Courtesy ScienceDaily
May 17, 2011

>Applying Neuroscience to Robot Vision


Scientists have long attempted to replicate human attributes and abilities such as detailed vision, spatial perception and object grasping in robots. After three years of intense work, the members of EYESHOTS* have made progress in controlling the interaction between vision and movement, and as a result have designed an advanced three-dimensional visual system, synchronized with robotic arms, which could allow robots to observe and be aware of their surroundings and also remember the contents of those images in order to act accordingly.

For a humanoid robot to successfully interact with its environment and develop tasks without supervision, it is first necessary to refine these basic mechanisms that are still not completely resolved, says Spanish researcher Ángel Pasqual del Pobil, director of the Robotic Intelligence Laboratory of the Universitat Jaume I. His team has validated the members’ findings with a system built at the University of Castellón (Spain) consisting of a robot head with moving eyes integrated into a torso with articulated arms.

To build the computer models, the team started from knowledge of animal and human biology, with experts in neuroscience, psychology, robotics and engineering working together. The study began by recording the neurons of monkeys engaged in visual-motor coordination, since primates share our way of perceiving the world.

The first feature of our visual system that the members replicated artificially was saccadic eye movement, which is related to the dynamic shifting of attention. According to Dr. Pobil: “We constantly change the point of view through very fast eye movements, so fast that we are hardly aware of it. When the eyes are moving, the image is blurred and we can’t see clearly. Therefore, the brain must integrate the fragments as if it were a puzzle to give the impression of a continuous and perfect image of our surroundings.”

From the neural data, the experts developed computer models of the section of the brain that integrates images with movements of both eyes and arms. This integration is very different from that which is normally carried out by engineers and experts in robotics. The EYESHOTS consortium set out to prove that when we make a grasping movement towards an object, our brain does not previously have to calculate the coordinates.

As the Spanish researcher explains: “The truth is that the sequence is much more straightforward: our eyes look at a point and tell our arm where to go. Babies learn this progressively by connecting neurons.” Therefore, these learning mechanisms have also been simulated in EYESHOTS through a neural network that allows robots to learn how to look, how to construct a representation of the environment, how to preserve the appropriate images, and use their memory to reach for objects even if these are out of their sight at that moment.

“Our findings can be applied to any future humanoid robot capable of moving its eyes and focusing on one point. These are priority issues for the other mechanisms to work correctly,” points out the researcher.

EYESHOTS was funded by the European Union through the Seventh Framework Programme and coordinated by the University of Genoa (Italy).

* EYESHOTS (Heterogeneous 3-D Visual Perception Across Fragments)