Archive for January, 2011

January 23, 2011

>Ding, Aspect Oriented Programming with PHP

Ding (http://marcelog.github.com/Ding/) is a PHP framework that provides dependency injection, Aspect Oriented Programming, a lightweight, simple, and quick MVC implementation, syslog support, a TCP client and server with non-blocking sockets, timers, custom error, signal, and exception handling, PAGI integration (for the Asterisk Gateway Interface), and PAMI integration (for Asterisk management). It is similar to Java’s Seasar and Spring frameworks.
Ding offers the following features:
  • Scalable architecture that makes it easy to adopt new features.
  • Lightweight, easy to use, and useful.
  • Loosely coupled: use only what you need and nothing more.
  • Setter injection (for arrays, scalar values, PHP code, and references to other beans).
  • Constructor injection (for the same data types as above).
  • Can define factory beans, factory classes, and factory methods to create beans.
  • Managed bean lifecycle (for singletons and prototypes).
  • Initialization and destruction methods called by the container.
  • Aspects (as in aspect oriented programming).
  • Lightweight implementation of the MVC (Model View Controller) pattern.
  • Annotations used by helpers and the container (e.g. @InitMethod, @DestroyMethod, @Controller, @ErrorHandler, etc.).
  • Can cache proxies and bean definitions with Zend_Cache, Memcached, Filesystem, and APC.
  • Integration with PAMI and PAGI, so you can build Asterisk (telephony) applications via AGI and AMI.
  • Helpers: SignalHandler, ErrorHandler, ShutdownHandler, Timer, Syslog, TCPServer, TCPClient, etc.

The name “Ding” comes from the act of doing dependency injection (DI): the result sounds something like “DI’ing”.
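For readers new to the pattern, here is a minimal, hand-wired sketch of the two injection styles listed above. This is not Ding’s actual API; it only illustrates the plumbing that a container such as Ding automates from configuration or annotations.

```php
<?php
// Minimal illustration of constructor vs. setter injection in plain PHP.
// Not Ding's API: a DI container normally does this wiring from configuration.

class Logger
{
    public function log($message)
    {
        echo "[log] $message\n";
    }
}

class ReportService
{
    private $logger;

    // Constructor injection: the dependency is supplied when the object is built.
    public function __construct(Logger $logger)
    {
        $this->logger = $logger;
    }

    public function run()
    {
        $this->logger->log('report generated');
    }
}

class MailService
{
    private $logger;

    // Setter injection: the dependency is supplied after construction.
    public function setLogger(Logger $logger)
    {
        $this->logger = $logger;
    }

    public function send()
    {
        $this->logger->log('mail sent');
    }
}

// Hand wiring; a container builds this object graph for you from bean definitions.
$logger = new Logger();

$report = new ReportService($logger);
$report->run();

$mail = new MailService();
$mail->setLogger($logger);
$mail->send();
```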
January 23, 2011

>The Internet is Running Out of Space

On February 2nd around 4 a.m., the Internet will run out of its current version of IP addresses. At least that’s what one Internet Service Provider is predicting based on a rate of about one million addresses every four hours.
Hurricane Electric has launched Twitter and Facebook accounts that count down to what it has termed the “IPcalypse.”
Every device that connects to the Internet gets a unique code called an IP address. The current system, IPv4, supports only about 4 billion individual addresses. Fortunately, some smart folks foresaw this problem long ago and invented IPv6, a system whose longer, 128-bit addresses (written with both letters and digits) can handle 340,282,366,920,938,463,463,374,607,431,768,211,456 addresses. Hurricane Electric’s doomsday campaign encourages other Internet service providers to transition to that system. Fortunately, the Internet Society‘s wiki assures us that IPv4 and IPv6 can coexist during the transition despite being largely incompatible. Software and hardware developers are working on transition mechanisms, and most operating systems now install support for IPv6 by default.
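Those figures are simply powers of two: IPv4 addresses are 32 bits long, while IPv6 addresses are 128 bits long. A quick sanity check in PHP (assuming the bcmath extension is available):

```php
<?php
// Address counts quoted above, computed from the address widths
// (requires PHP's bcmath extension for arbitrary-precision integers).

echo bcpow('2', '32'), "\n";   // 4294967296 -- roughly 4 billion IPv4 addresses
echo bcpow('2', '128'), "\n";  // 340282366920938463463374607431768211456 IPv6 addresses
```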
Thanks to Mashable
January 22, 2011

>For Robust Robots, Let Them Be Babies First

Or at least that’s not too far off from what University of Vermont roboticist Josh Bongard has discovered, as he reports in the January 10 online edition of the Proceedings of the National Academy of Sciences.
In a first-of-its-kind experiment, Bongard created both simulated and actual robots that, like tadpoles becoming frogs, change their body forms while learning how to walk. And, over generations, his simulated robots also evolved, spending less time in “infant” tadpole-like forms and more time in “adult” four-legged forms.
These evolving populations of robots were able to learn to walk more rapidly than ones with fixed body forms. And, in their final form, the changing robots had developed a more robust gait — better able to deal with, say, being knocked with a stick — than the ones that had learned to walk using upright legs from the beginning.
“This paper shows that body change, morphological change, actually helps us design better robots,” Bongard says. “That’s never been attempted before.”
Robots are complex
Bongard’s research, supported by the National Science Foundation, is part of a wider venture called evolutionary robotics. “We have an engineering goal,” he says, “to produce robots as quickly and consistently as possible.” In this experimental case: upright four-legged robots that can move themselves to a light source without falling over.
“But we don’t know how to program robots very well,” Bongard says, because robots are complex systems. In some ways, they are too much like people for people to easily understand them.
“They have lots of moving parts. And their brains, like our brains, have lots of distributed materials: there’s neurons and there’s sensors and motors and they’re all turning on and off in parallel,” Bongard says, “and the emergent behavior from the complex system which is a robot, is some useful task like clearing up a construction site or laying pavement for a new road.” Or at least that’s the goal.
But, so far, engineers have been largely unsuccessful at creating robots that can continually perform simple, yet adaptable, behaviors in unstructured or outdoor environments.
Which is why Bongard, an assistant professor in UVM’s College of Engineering and Mathematical Sciences, and other robotics experts have turned to computer programs to design robots and develop their behaviors — rather than trying to program the robots’ behavior directly.
His new work may help.
To the light
Using a sophisticated computer simulation, Bongard unleashed a series of synthetic beasts that move about in a 3-dimensional space. “It looks like a modern video game,” he says. Each creature — or, rather, each generation of the creatures — then runs a software routine, called a genetic algorithm, that experiments with various motions until it develops a slither, shuffle, or walking gait — based on its body plan — that can get it to the light source without tipping over.
“The robots have 12 moving parts,” Bongard says. “They look like the simplified skeleton of a mammal: it’s got a jointed spine and then you have four sticks — the legs — sticking out.”
Some of the creatures begin flat to the ground, like tadpoles or, perhaps, snakes with legs; others have splayed legs, a bit like a lizard; and others keep upright legs, like mammals, through the full set of simulations.
And why do the generations of robots that progress from slithering to wide legs and, finally, to upright legs, ultimately perform better, getting to the desired behavior faster?
“The snake and reptilian robots are, in essence, training wheels,” says Bongard, “they allow evolution to find motion patterns quicker, because those kinds of robots can’t fall over. So evolution only has to solve the movement problem, but not the balance problem, initially. Then gradually over time it’s able to tackle the balance problem after already solving the movement problem.”
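As a rough caricature of that “training wheels” schedule (this is not Bongard’s simulator; the fitness function below is a made-up stand-in for the physics simulation that scores how far a robot gets without tipping over), the idea is a simple evolutionary search over controller parameters while a posture parameter is ramped from flat to upright across the generations:

```php
<?php
// Toy caricature of evolving a gait while the body "grows up".
// fitness() is a hypothetical stand-in, not a physics simulation.

function fitness(array $controller, $posture)
{
    $drive = array_sum($controller) / count($controller); // crude "gait quality"
    $distance = $drive * (0.5 + $posture);                // upright bodies cover more ground
    $fallPenalty = $posture * max(0.0, 1.0 - $drive);     // upright + poor gait = falls
    return $distance - $fallPenalty;
}

$generations = 200;
$controller = array_fill(0, 12, 0.5); // one parameter per moving part

for ($g = 0; $g < $generations; $g++) {
    // Body schedule: flat (0.0) early on, fully upright (1.0) by the last generation.
    $posture = $g / ($generations - 1);

    // Minimal (1+1) evolutionary step: mutate one parameter, keep it if it helps.
    $mutant = $controller;
    $i = array_rand($mutant);
    $mutant[$i] = min(1.0, max(0.0, $mutant[$i] + (mt_rand() / mt_getrandmax() - 0.5) * 0.2));

    if (fitness($mutant, $posture) >= fitness($controller, $posture)) {
        $controller = $mutant;
    }
}

printf("fitness of the final controller, standing upright: %.3f\n", fitness($controller, 1.0));
```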
Sound anything like how a human infant first learns to roll, then crawl, then cruise along the coffee table and, finally, walk?
“Yes,” says Bongard, “We’re copying nature, we’re copying evolution, we’re copying neural science when we’re building artificial brains into these robots.” But the key point is that his robots don’t only evolve their artificial brain — the neural network controller — but rather do so in continuous interaction with a changing body plan. A tadpole can’t kick its legs, because it doesn’t have any yet; it’s learning some things legless and others with legs.
And this may help to explain the most surprising — and useful — finding in Bongard’s study: the changing robots were not only faster in getting to the final goal, but afterward were more able to deal with new kinds of challenges that they hadn’t before faced, like efforts to tip them over.
Bongard is not exactly sure why this is, but he thinks it’s because controllers that evolved in the robots whose bodies changed over generations learned to maintain the desired behavior over a wider range of sensor-motor arrangements than controllers evolved in robots with fixed body plans. It seems that learning to walk while flat, then squat, and then upright gave the evolving robots the resilience to stay upright when faced with new disruptions. Perhaps what a tadpole learns before it has legs makes it better able to use its legs once they grow.
“Realizing adaptive behavior in machines has to date focused on dynamic controllers, but static morphologies,” Bongard writes in his PNAS paper. “This is an inheritance from traditional artificial intelligence in which computer programs were developed that had no body with which to affect, and be affected by, the world.”
“One thing that has been left out all this time is the obvious fact that in nature it’s not that the animal’s body stays fixed and its brain gets better over time,” he says. “In natural evolution, animals’ bodies and brains are evolving together all the time.” A human infant, even if she knew how, couldn’t walk: her bones and joints aren’t up to the task until she starts to experience stress on the foot and ankle.
That hasn’t been done in robotics for an obvious reason: “it’s very hard to change a robot’s body,” Bongard says, “it’s much easier to change the programming inside its head.”
Lego proof
Still, Bongard gave it a try. After running 5000 simulations, each taking 30 hours on the parallel processors in UVM’s Vermont Advanced Computing Center — “it would have taken 50 or 100 years on a single machine,” Bongard says — he took the task into the real world.
“We built a relatively simple robot, out of a couple of Lego Mindstorm kits, to demonstrate that you actually could do it,” he says. This physical robot is four-legged, like in the simulation, but the Lego creature wears a brace on its front and back legs. “The brace gradually tilts the robot,” as the controller searches for successful movement patterns, Bongard says, “so that the legs go from horizontal to vertical, from reptile to quadruped.
“While the brace is bending the legs, the controller is causing the robot to move around, so it’s able to move its legs, and bend its spine,” he says, “it’s squirming around like a reptile flat on the ground and then it gradually stands up until, at the end of this movement pattern, it’s walking like a coyote.”
“It’s a very simple prototype,” he says, “but it works; it’s a proof of concept.”
Courtesy ScienceDaily
January 22, 2011

>Nokia Lottery scammers out there, beware!

You might have come across hoax emails or SMS messages like “You won a huge amount of cash in an ongoing Nokia promo.” Ignore them. The good folks at Nokia Conversations have posted a guide on how this scam actually works and how careful you have to be when you come across such messages. They have also received numerous comments on the issue, and the one thing they want the world to know is that the Nokia Lottery isn’t real.
How the Nokia Lottery scam works
There are a couple of methods these scammers use, in the hope of stealing your money.
You receive an email claiming you have been chosen at random, to receive a prize, usually a large figure of money. In this case, £350,000 is the prize. However, to receive your winnings you must send them some money as an admin charge. £650 in this same case. They hope you’ll be so dazzled by the large sum of money they claim you’ve won, that you’ll send the admin fee. It’s at this point you’ll never hear from them again and your money will be lost.
A second method commonly used is contacting you by SMS. These text messages will tell you you’ve won a heap of money, like in this case, but you must first phone a telephone number or email back to hand over your bank details. Do not do this. Never give your bank details to a stranger, especially if all you know about them is their mobile phone number.
Whatever method they use, they will be asking for the same thing: either your bank account details or some money in another form. The messages they send always look official, but they’re not. They’re fake.
What to do if you receive a Nokia Lottery scam SMS or email
  • Don’t respond to the messages; doing so will only encourage them to keep contacting you.
  • Report the scam to an agency that can help. Action Fraud is a UK-based organisation that deals with these matters in the UK, and there are similar organisations all over the world.
  • If you haven’t entered any competition, or if the prize looks too good to be true, it almost certainly is.
Courtesy fonearena
January 22, 2011

>Fruit Fly Nervous System Provides New Solution to Fundamental Computer Network Problem

The fruit fly has evolved a method for arranging the tiny, hair-like structures it uses to feel and hear the world that’s so efficient a team of scientists in Israel and at Carnegie Mellon University says it could be used to more effectively deploy wireless sensor networks and other distributed computing applications.
With a minimum of communication and without advance knowledge of how they are connected with each other, the cells in the fly’s developing nervous system manage to organize themselves so that a small number of cells serve as leaders that provide direct connections with every other nerve cell, said author Ziv Bar-Joseph, associate professor of machine learning at Carnegie Mellon University.
The result, the researchers report in the Jan. 14 edition of the journal Science, is the same sort of scheme used to manage the distributed computer networks that perform such everyday tasks as searching the Web or controlling an airplane in flight. But the method used by the fly’s nervous system to organize itself is much simpler and more robust than anything humans have concocted.
“It is such a simple and intuitive solution, I can’t believe we did not think of this 25 years ago,” said co-author Noga Alon, a mathematician and computer scientist at Tel Aviv University and the Institute for Advanced Study in Princeton, N.J.
Bar-Joseph, Alon and their co-authors — Yehuda Afek of Tel Aviv University and Naama Barkai, Eran Hornstein and Omer Barad of the Weizmann Institute of Science in Rehovot, Israel — used the insights gained from fruit flies to design a new distributed computing algorithm. They found it has qualities that make it particularly well suited for networks in which the number and position of the nodes is not completely certain. These include wireless sensor networks, such as environmental monitoring, where sensors are dispersed in a lake or waterway, or systems for controlling swarms of robots.
“Computational and mathematical models have long been used by scientists to analyze biological systems,” said Bar-Joseph, a member of the Lane Center for Computational Biology in Carnegie Mellon’s School of Computer Science. “Here we’ve reversed the strategy, studying a biological system to solve a long-standing computer science problem.”
Today’s large-scale computer systems and the nervous system of a fly both take a distributive approach to performing tasks. Though the thousands or even millions of processors in a computing system and the millions of cells in a fly’s nervous system must work together to complete a task, none of the elements need to have complete knowledge of what’s going on, and the systems must function despite failures by individual elements.
In the computing world, one step toward creating this distributive system is to find a small set of processors that can be used to rapidly communicate with the rest of the processors in the network — what graph theorists call a maximal independent set (MIS). Every processor in such a network is either a leader (a member of the MIS) or is connected to a leader, but the leaders are not interconnected.
A similar arrangement occurs in the fruit fly, which uses tiny bristles to sense the outside world. Each bristle develops from a nerve cell, called a sensory organ precursor (SOP), which connects to adjoining nerve cells, but does not connect with other SOPs.
For three decades, computer scientists have puzzled over how processors in a network can best elect an MIS. The common solutions use a probabilistic method — similar to rolling dice — in which some processors identify themselves as leaders, based in part on how many connections they have with other processors. Processors connected to these self-selected leaders take themselves out of the running and, in subsequent rounds, additional processors self-select themselves and the processors connected to them take themselves out of the running. At each round, the chances of any processor joining the MIS (becoming a leader) increases as a function of the number of its connections.
This selection process is rapid, Bar-Joseph said, but it entails lots of complicated messages being sent back and forth across the network, and it requires that all of the processors know in advance how they are connected in the network. That can be a problem for applications such as wireless sensor networks, where sensors might be distributed randomly and all might not be within communication range of each other.
During the larval and pupal stages of a fly’s development, the nervous system also uses a probabilistic method to select the cells that will become SOPs. In the fly, however, the cells have no information about how they are connected to each other. As various cells self-select themselves as SOPs, they send out chemical signals to neighboring cells that inhibit those cells from also becoming SOPs. This process continues for three hours, until all of the cells are either SOPs or are neighbors to an SOP, and the fly emerges from the pupal stage.
In the fly, Bar-Joseph noted, the probability that any cell will self-select increases not as a function of connections, as in the typical MIS algorithm for computer networks, but as a function of time. The method does not require advance knowledge of how the cells are arranged. The communication between cells is as simple as can be.
The researchers created a computer algorithm based on the fly’s approach and proved that it provides a fast solution to the MIS problem. “The run time was slightly greater than current approaches, but the biological approach is efficient and more robust because it doesn’t require so many assumptions,” Bar-Joseph said. “This makes the solution applicable to many more applications.”
This research was supported in part by grants from the National Institutes of Health and the National Science Foundation.
Courtesy: Science Daily
January 22, 2011

>Facebook raises $1 billion through Goldman Sachs

Facebook announced Friday that it had raised $1.5 billion in new financing led by Goldman Sachs.
The investments include $500 million from Goldman Sachs and the Russian investment firm Digital Sky Technologies, as well as $1 billion from wealthy Goldman clients based overseas.
The round of financing values the social networking giant at $50 billion — more than the market values of Yahoo and eBay. According to SharesPost, a private marketplace, the private shares of Facebook are trading at an implied valuation of $76 billion.
Facebook said in a statement that while it had the opportunity to accept as much as $1.5 billion from Goldman’s foreign clients — after American individuals were shut out of the offering — it chose to limit the amount.
January 22, 2011

>Motorola Defy coming to India next week?

The Motorola Defy is an Android-powered rugged smartphone, and a beautiful, well-built device compared to other rugged phones out there. We also posted a little preview of this phone a few weeks ago, where we told you that the phone itself is nice but that Motorola’s MotoBlur skin doesn’t win every heart, although for some people MotoBlur is a true winner.
Now rumors are popping up that Motorola is planning to launch the Motorola Defy in India on January 24th. AndroidOS.in is the source of this rumor, and it says the device will go for 20,000 INR (about $438). The device currently runs Android 2.1 and, according to Motorola, will be upgraded to Android 2.2 soon.
Thanks to fonearena
January 22, 2011

>Nokia C7 controls a BMW

The Nokia C7 has been turned into a remote control for a BMW. An Jiaxuan and a friend developed an app that runs on the Nokia C7 and lets them remotely control a BMW car. The project started with controlling toy cars and ended up controlling a BMW, and took around 20 days of coding.
Check out the following video!

January 22, 2011

>Motorola’s First Android Tablet to Retail for $800

According to information leaked from an anonymous Verizon employee, Motorola’s Xoom, a tablet running the long-anticipated Android Honeycomb, will sell for $800.
First, it, along with the Droid Bionic and a lineup of other smartphones, is one of the first Verizon 4G LTE devices. Second, the tablet is one of the first that will be running Honeycomb (Android 3.0), the tablet-specific fork of the Android mobile operating system. While we’ve seen Android tablets running version 1.6 and even 2.2 (Froyo), this will be the first instance of an intentional and elegant Android approach to the tablet form factor.
In addition to the new OS, the Xoom features 1080p video support, front- and rear-facing cameras (2MP and 5MP, respectively), an HDMI output, and an accelerometer.
Motorola also says the device “delivers console-like gaming performance on its 1280×800 display, and features a built-in gyroscope, barometer, e-compass, accelerometer and adaptive lighting for new types of applications. It also features Google Maps 5.0 with 3D interaction and delivers access to over 3 million Google eBooks and thousands of apps from Android Market.”
The first 3G and Wi-Fi-enabled Xoom units should be available around the end of Q1 2011, and according to new reports from Android Central, the minimum advertised price for the units will start at $800 — a hefty price tag compared to other gadget options currently on the market.
January 16, 2011

>World’s Strangest Glasses-Free 3D Solution

>Filmmaker and visual artist Francois Vogel has developed the world’s strangest glasses-free 3D solution. In technical speak, this “system works only on 120Hz monitor displays [and] simulates 3D Active Shutter Glasses.” Video after the break.