Archive for March, 2011

March 20, 2011

Two New SCAP Documents Help Improve Automating Computer Security Management


It’s increasingly difficult to keep up with all the vulnerabilities present in today’s highly complex operating systems and applications. Attackers constantly search for and exploit these vulnerabilities to commit identity fraud, intellectual property theft and other attacks. The National Institute of Standards and Technology (NIST) has released two updated publications that help organizations to find and manage vulnerabilities more effectively, by standardizing the way vulnerabilities are identified, prioritized and reported.
Computer security departments work behind the scenes at government agencies and other organizations to keep computers and networks secure. A valuable tool for them is security automation software that uses NIST’s Security Content Automation Protocol (SCAP). Software based on SCAP can be used to automatically check individual computers to see if they have any known vulnerabilities and if they have the appropriate security configuration settings and patches in place. Security problems can be identified quickly and accurately, allowing them to be resolved before hackers can exploit them.
The first publication, The Technical Specifications for the Security Content Automation Protocol (SCAP) Version 1.1 (NIST Special Publication (SP) 800-126 Revision 1) refines the protocol’s requirements from the SCAP 1.0 version. SCAP itself is a suite of specifications for standardizing the format and nomenclature by which security software communicates to assess software flaws, security configurations and software inventories.
SP 800-126 Rev. 1 tightens the requirements of the individual specifications in the suite to support SCAP’s functionality and ensure interoperability between SCAP tools. It also adds a new specification — the Open Checklist Interactive Language (OCIL) — that allows security experts to gather information that is not accessible by automated means. For example, OCIL could be used to ask users about their recent security awareness training or to prompt a system administrator to review security settings only available through a proprietary graphical user interface. Additionally, SCAP 1.1 calls for the use of the 5.8 version of the Open Vulnerability and Assessment Language (OVAL).
NIST and others provide publicly accessible repositories of security information and standard security configurations in SCAP formats, which can be downloaded and used by any tool that complies with the SCAP protocol. For example, the NIST-run National Vulnerability Database (NVD) provides a unique identifier for each reported software vulnerability, an analysis of its potential damage and a severity score. The NVD has grown from 6,000 listings in 2002 to about 46,000 in early 2011. It is updated daily.
The second document, Guide to Using Vulnerability Naming Schemes (Special Publication 800-51 Revision 1), provides recommendations for naming schemes used in SCAP. Before these schemes were standardized, different organizations referred to vulnerabilities in different ways, which created confusion. These naming schemes “enable better synthesis of information about software vulnerabilities and misconfigurations,” explained co-author David Waltermire, which minimizes confusion and can lead to faster security fixes. The Common Vulnerabilities and Exposures (CVE) scheme identifies software flaws; the Common Configuration Enumeration (CCE) scheme classifies configuration issues.
SP 800-51 Rev.1 provides an introduction to both naming schemes and makes recommendations for using them. It also suggests how software and service vendors should use the vulnerability names and naming schemes in their products and service offerings.
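Both schemes give every issue a single, structured identifier, which is what lets tools compare and cross-reference names mechanically. A minimal sketch of what such identifier checks might look like (the regular expressions below are illustrative assumptions, not patterns lifted from SP 800-51):

```javascript
// Recognize identifiers in the two naming schemes discussed above.
// Patterns are illustrative: CVE identifiers look like CVE-YYYY-NNNN
// (four or more digits in the final field), and CCE identifiers look
// like CCE-NNNN-N with a trailing check digit.
function isCveId(id) {
  return /^CVE-\d{4}-\d{4,}$/.test(id);
}

function isCceId(id) {
  return /^CCE-\d{4,5}-\d$/.test(id);
}
```

With structured names like these, a vulnerability scanner and a patch-management tool can agree that they are talking about the same flaw without any human translation step.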
Courtesy: ScienceDaily
March 20, 2011

Facebook Buys Feature Phone Developer "Snaptu" For Up To $70 Million


Facebook has made one of its biggest moves yet in its strategy to dominate mobile services: the social network has acquired Snaptu, a developer of apps for feature phones, for a sum believed to be up to $70 million. The acquisition is Facebook’s first outside of the U.S. and enhances the work Facebook has already done to make its services accessible to more than just smartphone users in developed markets.

March 20, 2011

Spice Projector Phone


Spice Mobiles has launched a new projector phone, the Popkorn (model number M-9000). Its standout feature is a built-in projector that lets you watch movies and view documents on a large screen.

The Popkorn also boasts an analog TV tuner, which means you can watch some free-to-air TV channels. There is also a document viewer and a laser pointer; the viewer supports Microsoft Word, Excel, PowerPoint and Adobe PDF files. Like most projector phones, this one suffers from poor battery life, with a rated talk time of just 3.5 hours, and it is not clear whether that figure includes projector use. But the price point is certainly attractive, and Spice is airing ads for the handset during the Cricket World Cup, which might help it sell a good number of them.


Quad-band GSM support: 850/900/1800/1900 MHz

Colour display with 320×240 pixel resolution

1200 mAh battery

Talk time: 3.5 hours

Standby time: 300 hours

3.2 MP camera with 15 fps video recording

Memory: 87 MB internal, expandable via memory card up to 16 GB

SMS, MMS, email

MP3 player

Video player

FM radio with recording

WAP, GPRS, Bluetooth

Remote control of a PC via Bluetooth

March 19, 2011

Bomb Disposal Robot Getting Ready for Front-Line Action


The University of Greenwich has joined forces with a Kent-based company in the design and manufacture of a bomb disposal robot for use by security forces, including the British Army.
The organisations have come together to create a lightweight, remote-operated vehicle, or robot, that can be controlled by a wireless device, not unlike a games console, from a distance of several hundred metres.
The innovative robot, which can climb stairs and even open doors, will be used by soldiers on bomb disposal missions in countries such as Afghanistan.
Experts from the Department of Computer & Communications Engineering, based within the university’s School of Engineering, are working on the project alongside NIC Instruments Limited of Folkestone, manufacturers of security search and bomb disposal equipment.
Much lighter and more flexible than traditional bomb disposal units, the robot is easier for soldiers to carry and use when out in the field. It has cameras on board, which relay images back to the operator via the hand-held control, and includes a versatile gripper which can carry and manipulate delicate items.
The robot also includes nuclear, biological and chemical weapons sensors.
Measuring just 72cm by 35cm, the robot weighs 48 kilogrammes and can move at speeds of up to eight miles per hour.
Courtesy: ScienceDaily
March 15, 2011

WebGL – 3D Experience on the Web


WebGL is a cross-platform, royalty-free web standard for a low-level 3D graphics API based on OpenGL ES 2.0, exposed through the HTML5 Canvas element as Document Object Model interfaces. Developers familiar with OpenGL ES 2.0 will recognize WebGL as a shader-based API using GLSL, with constructs that are semantically similar to those of the underlying OpenGL ES 2.0 API. It stays very close to the OpenGL ES 2.0 specification, with some concessions made for what developers expect out of memory-managed languages such as JavaScript.
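In practice that means ordinary JavaScript driving GLSL shaders through a canvas element. A minimal sketch of the pattern (the canvas id is an invented example, and error handling is reduced to a thrown message):

```javascript
// A trivial GLSL vertex shader: pass 2D positions straight through.
const vertexSrc = `
attribute vec2 position;
void main() {
  gl_Position = vec4(position, 0.0, 1.0);
}`;

// Compile a shader of the given type from source, throwing the driver's
// info log if compilation fails.
function compileShader(gl, type, source) {
  const shader = gl.createShader(type);
  gl.shaderSource(shader, source);
  gl.compileShader(shader);
  if (!gl.getShaderParameter(shader, gl.COMPILE_STATUS)) {
    throw new Error(gl.getShaderInfoLog(shader));
  }
  return shader;
}

// In a browser page containing <canvas id="glcanvas">:
// const gl = document.getElementById("glcanvas").getContext("webgl");
// const shader = compileShader(gl, gl.VERTEX_SHADER, vertexSrc);
```

The shader source is plain GLSL, exactly as an OpenGL ES 2.0 developer would write it; only the surrounding plumbing is JavaScript.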

March 15, 2011

Nanorods Could Greatly Improve Visual Display of Information


Chemists at the University of California, Riverside have developed tiny, nanoscale-size rods of iron oxide particles in the lab that respond to an external magnetic field in a way that could dramatically improve how visual information is displayed in the future.

Previously, Yadong Yin’s lab showed that when an external magnetic field is applied to iron oxide particles in solution, the solution changes color in response to the strength and orientation of the magnetic field. Now his lab has succeeded in applying a coating of silica (silicon dioxide) to the iron oxide particles so that when they come together in solution, like linearly connected spheres, they eventually form tiny rods — or “nanorods” — that permanently retain their peapod-like structure.

When an external magnetic field is applied to the solution of nanorods, they align themselves parallel to one another like a set of tiny flashlights turned in one direction, and display a brilliant color.

“We have essentially developed tunable photonic materials whose properties can be manipulated by changing their orientation with external fields,” said Yin, an assistant professor of chemistry. “These nanorods with configurable internal periodicity represent the smallest possible photonic structures that can effectively diffract visible light. This work paves the way for fabricating magnetically responsive photonic structures with significantly reduced dimensions so that color manipulation with higher resolution can be realized.”

Applications of the technology include high-definition pattern formation, posters, pictures, energy efficient color displays, and devices like traffic signals that routinely use a set of colors. Other applications are in bio- and chemical sensing as well as biomedical labeling and imaging. Color displays that currently cannot be seen easily in sunlight — for example, a laptop screen — will be seen more clearly and brightly on devices that utilize the nanorod technology since the rods simply diffract a color from the visible light incident on them.

Study results appear online March 14 in Angewandte Chemie.

In the lab, Yin and his graduate students Yongxing Hu and Le He initially coated the magnetic iron oxide particles with a thin layer of silica. Then they applied a magnetic field to assemble the particles into chains. Next, they coated the chains with an additional layer of silica to allow for a silica shell to form around and stabilize the chain structure.

According to the researchers, the timing of magnetic field exposure is critically important to the success of the chain formation because it allows for fine-tuning the “interparticle” spacing — the distance between any two particles — within photonic chains. They report that the chaining of the magnetic particles needs to be induced by brief exposure to external fields during the silica coating process so that the particles temporarily stay connected, allowing additional silica deposition to then fix the chains into mechanically robust rods or wires.

They also report in the research paper that the interparticle spacing within the chains in a sample can be fine-tuned by adjusting the timing of the magnetic field exposure; the length of the individual chains, which does not affect the color displayed, can be controlled by changing the duration of the magnetic field exposure.

“The photonic nanorods that we developed disperse randomly in solution in the absence of a magnetic field, but align themselves and show diffraction color instantly when an external field is applied,” Yin said. “It is the periodic arrangement of the iron oxide particles that effectively diffracts visible light and displays brilliant colors.”

He explained that all the one-dimensional photonic rods within a sample show a single color because the particles arrange themselves with uniform periodicity — that is, the interparticle spacing within all the chains is the same, regardless of the length of the individual chains. Further, the photonic chains remain separated from each other in magnetic fields due to the magnetic repulsive force that acts perpendicular to the direction of the magnetic field.

The researchers note that a simple and convenient way to change the periodicity in the rods is to use iron oxide clusters of different sizes. This, they argue, would make it possible to produce photonic rods with diffraction wavelengths across a wide range of the spectrum, from near ultraviolet to near infrared.

“One major advantage of the new technology is that it hardly requires any energy to change the orientation of the nanorods and achieve brightness or a color,” Yin said. “A current drawback, however, is that the interparticle spacing within the chains gets fixed once the silica coating is applied, allowing for no flexibility and only one color to be displayed.”

His lab is working now on achieving bistability for the nanorods. If the lab is successful, the nanorods would be capable of diffracting two colors, one at a time.

“This would allow the same device or pixel to display one color for a while and a different color later,” said Yin, a Cottrell Scholar.

A grant to Yin from the National Science Foundation supported the study.

Courtesy: ScienceDaily

March 14, 2011

New Switching Device Could Help Build an Ultrafast ‘Quantum Internet’


Northwestern University researchers have developed a new switching device that takes quantum communication to a new level. The device is a practical step toward creating a network that takes advantage of the mysterious and powerful world of quantum mechanics.
The researchers can route quantum bits, or entangled particles of light, at very high speeds along a shared network of fiber-optic cable without losing the entanglement information embedded in the quantum bits. The switch could be used toward achieving two goals of the information technology world: a quantum Internet, where encrypted information would be completely secure, and networking superfast quantum computers.
The device would enable a common transport mechanism, such as the ubiquitous fiber-optic infrastructure, to be shared among many users of quantum information. Such a system could route a quantum bit, such as a photon, to its final destination just like an e-mail is routed across the Internet today.
The research — a demonstration of the first all-optical switch suitable for single-photon quantum communications — is published by the journal Physical Review Letters.
“My goal is to make quantum communication devices very practical,” said Prem Kumar, AT&T Professor of Information Technology in the McCormick School of Engineering and Applied Science and senior author of the paper. “We work in fiber optics so that as quantum communication matures it can easily be integrated into the existing telecommunication infrastructure.”
The bits we all know through standard, or classical, communications only exist in one of two states, either “1” or “0.” All classical information is encoded using these ones and zeros. What makes a quantum bit, or qubit, so attractive is that it can be both one and zero simultaneously, as well as one or zero. Additionally, two or more qubits at different locations can be entangled — a mysterious connection that is not possible with ordinary bits.
Researchers need to build an infrastructure that can transport this “superposition and entanglement” (being one and zero simultaneously) for quantum communications and computing to succeed.
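The correlations entanglement produces can be illustrated with a toy classical simulation of measuring a Bell pair. This sketches the measurement statistics only, not Kumar's photonic hardware:

```javascript
// The Bell state (|00> + |11>)/sqrt(2) yields outcome "00" or "11" with
// equal probability when both qubits are measured; "01" and "10" never
// occur, so the two measured bits always agree.
function measureBellPair() {
  // Each of the two allowed outcomes has probability 1/2.
  return Math.random() < 0.5 ? [0, 0] : [1, 1];
}

// Every sampled pair is perfectly correlated:
for (let i = 0; i < 1000; i++) {
  const [a, b] = measureBellPair();
  if (a !== b) throw new Error("entangled bits must agree");
}
```

A classical simulation like this can reproduce the correlations of one fixed measurement setting, but not the full behaviour of entangled photons under all settings; that gap is precisely what makes quantum links useful for secure communication.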
The qubit Kumar works with is the photon, a particle of light. A photonic quantum network will require switches that don’t disturb the physical characteristics (superposition and entanglement properties) of the photons being transmitted, Kumar says. He and his team built an all-optical, fiber-based switch that does just that while operating at very high speeds.
To demonstrate their switch, the researchers first produced pairs of entangled photons using another device developed by Kumar, called an Entangled Photon Source. “Entangled” means that some physical characteristic (such as polarization as used in 3-D TV) of each pair of photons emitted by this device are inextricably linked. If one photon assumes one state, its mate assumes a corresponding state; this holds even if the two photons are hundreds of kilometers apart.
The researchers used pairs of polarization-entangled photons emitted into standard telecom-grade fiber. One photon of the pair was transmitted through the all-optical switch. Using single-photon detectors, the researchers found that the quantum state of the pair of photons was not disturbed; the encoded entanglement information was intact.
“Quantum communication can achieve things that are not possible with classical communication,” said Kumar, director of Northwestern’s Center for Photonic Communication and Computing. “This switch opens new doors for many applications, including distributed quantum processing where nodes of small-scale quantum processors are connected via quantum communication links.”
Courtesy: ScienceDaily
March 14, 2011

Predicting Future Appearance: New Computer-Based Technique Ages Photographic Images of People’s Faces


A Concordia graduate student has designed a promising computer program that could serve as a new tool in missing-child investigations and matters of national security. Khoa Luu has developed a more effective computer-based technique to age photographic images of people’s faces — an advance that could help to identify missing kids and criminals on the lam.

“Research into computer-based age estimation and face aging is a relatively young field,” says Luu, a PhD candidate from Concordia’s Department of Computer Science and Software Engineering whose master’s thesis explores new and highly effective ways to estimate age and predict future appearance. His work is being supervised by professors Tien Dai Bui and Ching Suen.

Best recorded technique

“We pioneered a novel technique that combines two previous approaches, known as active appearance models (AAMs) and support vector regression (SVR),” says Luu. “This combination dramatically improves the accuracy of age estimation. In tests, our method achieved the most promising results of any published approach.”

Most face-aged images are currently rendered by forensic artists. Although these artists are trained in the anatomy and geometry of faces, they rely on art rather than science.

Face changes at different stages

“Our approach to computerized face aging relies on combining existing techniques,” says Luu. “The human face changes in different ways at different stages of life. During the growth and development stage, the physical structure of the face changes, becoming longer and wider; in the adult aging phase, the primary changes to the face are in soft tissue. Wrinkles and lines form, and muscles begin to lose their tone.”

All this information has to be incorporated into the computer algorithm. Since there are two periods with fundamentally different aging mechanisms, Luu had to construct two different ‘aging functions’ for this project.
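The two-regime structure can be sketched in code. Everything numeric below is an invented placeholder (the crossover age, the linear forms, the coefficients); the real system learns its aging functions from AAM features with SVR rather than using hand-set constants:

```javascript
// Two separate "aging functions", one per life stage.
const GROWTH_END = 18; // assumed crossover between the two regimes

function agePrediction(currentAge, targetAge) {
  // Pick the aging function for the regime the target age falls in.
  if (targetAge < GROWTH_END) {
    // Growth stage: the face's physical structure changes
    // (becoming longer and wider).
    return { regime: "growth", structuralChange: 0.05 * (targetAge - currentAge) };
  }
  // Adult stage: primarily soft-tissue change (wrinkles, lines,
  // loss of muscle tone).
  return { regime: "adult", textureChange: 0.01 * (targetAge - currentAge) };
}
```

The point of the sketch is only the branching: a single smooth function cannot capture both the structural changes of childhood and the textural changes of adulthood, so the algorithm needs one model per stage.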

To develop his face aging technique, Luu first used a combination of AAMs and SVR methods to interpret faces and “teach” the computer aging rules. Then, he input information from a database of facial characteristics of siblings and parents taken over an extended period. Using this data, the computer then predicts an individual’s facial appearance at a future period.

“Our research has applications in a whole range of areas,” says Luu. “People in national security, law enforcement, tobacco control and even in the cosmetic industry can all benefit from this technology.”

This study was supported by the Natural Sciences and Engineering Research Council of Canada and the Vietnamese Ministry of Education and Training.

Courtesy: ScienceDaily

March 7, 2011

Human Cues Used to Improve Computer User-Friendliness


Lijun Yin wants computers to understand inputs from humans that go beyond the traditional keyboard and mouse.

“Our research in computer graphics and computer vision tries to make using computers easier,” says the Binghamton University computer scientist. “Can we find a more comfortable, intuitive and intelligent way to use the computer? It should feel like you’re talking to a friend. This could also help disabled people use computers the way everyone else does.”

Yin’s team has developed ways to provide information to the computer based on where a user is looking as well as through gestures or speech. One of the basic challenges in this area is “computer vision.” That is, how can a simple webcam work more like the human eye? Can camera-captured data be used to understand a real-world object? Can this data be used to “see” the user and “understand” what the user wants to do?

To some extent, that’s already possible. Witness one of Yin’s graduate students giving a PowerPoint presentation and using only his eyes to highlight content on various slides. When Yin demonstrated this technology for Air Force experts last year, the only hardware he brought was a webcam attached to a laptop computer.

Yin says the next step would be enabling the computer to recognize a user’s emotional state. He works with a well-established set of six basic emotions — anger, disgust, fear, joy, sadness, and surprise — and is experimenting with different ways to allow the computer to distinguish among them. Is there enough data in the way the lines around the eyes change? Could focusing on the user’s mouth provide sufficient clues? What happens if the user’s face is only partially visible, perhaps turned to one side?

“Computers only understand zeroes and ones,” Yin says. “Everything is about patterns. We want to find out how to recognize each emotion using only the most important features.”
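The "patterns from a few important features" idea can be sketched as a nearest-centroid classifier. The two features and all the numbers below are invented placeholders for illustration, not Yin's model:

```javascript
// Each emotion is represented by a centroid in a small feature space,
// here just two measurements: [mouth curvature, brow lowering].
const centroids = {
  joy:      [0.9, 0.1],
  anger:    [0.2, 0.9],
  surprise: [0.5, 0.0],
};

// Classify a feature vector by the nearest centroid (squared
// Euclidean distance).
function classify(features) {
  let best = null, bestDist = Infinity;
  for (const [emotion, c] of Object.entries(centroids)) {
    const d = c.reduce((s, v, i) => s + (v - features[i]) ** 2, 0);
    if (d < bestDist) { bestDist = d; best = emotion; }
  }
  return best;
}
```

Real systems use many more features and far more sophisticated models, but the questions Yin poses (eyes only? mouth only? a partially turned face?) all amount to asking which small set of measurements still separates these clusters.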

He’s partnering with Binghamton University psychologist Peter Gerhardstein to explore ways this work could benefit children with autism. Many people with autism have difficulty interpreting others’ emotions; therapists sometimes use photographs of people to teach children how to understand when someone is happy or sad and so forth. Yin could produce not just photographs, but three-dimensional avatars that are able to display a range of emotions. Given the right pictures, Yin could even produce avatars of people from a child’s family for use in this type of therapy.

Yin and Gerhardstein’s previous collaboration led to the creation of a 3D facial expression database, which includes 100 subjects with 2,500 facial expression models. The database is available at no cost to the nonprofit research community and has become a worldwide test bed for those working on related projects in fields such as biomedicine, law enforcement and computer science.

Once Yin became interested in human-computer interaction, he naturally grew more excited about the possibilities for artificial intelligence.

“We want not only to create a virtual-person model, we want to understand a real person’s emotions and feelings,” Yin says. “We want the computer to be able to understand how you feel, too. That’s hard, even harder than my other work.”

Imagine if a computer could understand when people are in pain. Some may ask a doctor for help. But others — young children, for instance — cannot express themselves or are unable to speak for some reason. Yin wants to develop an algorithm that would enable a computer to determine when someone is in pain based just on a photograph.

Yin describes that health-care application and, almost in the next breath, points out that the same system that could identify pain might also be used to figure out when someone is lying. Perhaps a computer could offer insights like the ones provided by Tim Roth’s character, Dr. Cal Lightman, on the television show Lie to Me. The fictional character is a psychologist with an expertise in tracking deception who often partners with law-enforcement agencies.

“This technology,” Yin says, “could help us to train the computer to do facial-recognition analysis in place of experts.”

Courtesy: ScienceDaily

March 5, 2011

Mozilla Web Application Project Debuts


Mozilla on Thursday launched a developer preview of its Web application platform, a more distributed version of what Google is doing with its Chrome Web Store.
Web applications are simply Web sites with an accompanying configuration file. This file, the manifest, contains extra information necessary to install the Web app, which in some instances may make it available when there’s no network connection.
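In Mozilla's scheme the manifest is a small JSON file served alongside the site. A sketch of what one might contain, with field names following Mozilla's early Open Web App drafts (the app name, paths and URL are invented examples):

```json
{
  "name": "Example App",
  "description": "A sample Open Web App",
  "launch_path": "/index.html",
  "icons": {
    "128": "/img/icon-128.png"
  },
  "developer": {
    "name": "Example Developer",
    "url": "https://example.com"
  },
  "default_locale": "en"
}
```

Installing the app essentially means the browser stores this manifest, which is how the app can get an icon, a launch entry point and, in some cases, offline availability.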
Google’s Web app specification makes a distinction between installable Web apps and hosted Web apps. The former rely on Google Chrome Extension APIs and only run in the Chrome browser. The latter are simply what we know today as Web sites and they can be accessed by typing the appropriate URL into one’s Web browser.
Mozilla’s scheme differentiates between published applications and bookmarked applications. The former rely on Open Web App APIs. The latter are just Web sites, what Google calls hosted apps.
These two approaches are not quite compatible, though efforts are being made to make them more so. Google Chrome Web apps are only available from the Chrome Web Store and can only be installed in the Chrome browser. Mozilla Open Web apps will be available from anyone who bothers to set up a Web store using Mozilla’s specifications and can be installed in any compatible browser.