Personal Profile

Neural Network

Open-Access Visual Search Tool for Satellite Imagery

A new project by Carnegie Mellon University researchers provides journalists, citizen scientists, and other researchers with the ability to quickly scan large geographical regions for specific visual features.

Simply click on a feature in the satellite imagery – a baseball diamond, cul-de-sac, tennis court – and Terrapattern will find other things that look similar in the area and pinpoint them on the map.

Using a deep learning neural network trained for five days on an NVIDIA GeForce GPU, the model looks at small squares of the landscape and compares those patterns against a huge database of tagged map features from OpenStreetMap; in this way it learned to associate visual patterns with particular concepts.
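
The article doesn't publish Terrapattern's code, but the core idea can be sketched: the trained network turns each map tile into a feature vector, and "similar-looking" tiles are simply the nearest vectors. Everything below (function names, toy vectors) is illustrative, not the project's actual implementation:

```python
import numpy as np

def most_similar(query_vec, tile_vecs, k=3):
    """Return indices of the k tiles whose feature vectors are most
    similar (by cosine similarity) to the query tile's vector."""
    q = query_vec / np.linalg.norm(query_vec)
    t = tile_vecs / np.linalg.norm(tile_vecs, axis=1, keepdims=True)
    sims = t @ q                  # cosine similarity of every tile to the query
    return np.argsort(-sims)[:k]  # indices of the best matches, best first

# Toy "feature vectors" for 4 map tiles; tiles 0 and 2 resemble the query.
tiles = np.array([[1.0, 0.0, 0.0],
                  [0.0, 1.0, 0.0],
                  [0.9, 0.1, 0.0],
                  [0.0, 0.0, 1.0]])
query = np.array([1.0, 0.05, 0.0])
print(most_similar(query, tiles, k=2))  # prints [0 2]
```

In a real system the vectors would come from an intermediate layer of the trained network, and the search would use an approximate nearest-neighbour index rather than a brute-force scan.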

Currently, Terrapattern is limited to Pittsburgh, San Francisco, New York City and Detroit, but access to more cities is coming soon.

Artificial Intelligence Helps the Blind ‘See’ Facebook

On 5 April, Facebook introduced a new feature that automatically generates text descriptions of pictures using advanced object recognition technology.

Until now, people using screen readers would only hear the name of the person who shared the photo, followed by the term “photo” when they came upon an image in News Feed. Now they will get a richer description of what’s in a photo. For instance, someone could now hear, “Image may contain three people, smiling, outdoors.”
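
Conceptually, that spoken description is assembled from a list of detected concepts and their confidences. A minimal sketch of the idea (the labels, confidence values, and threshold here are invented for illustration, not Facebook's actual API):

```python
def alt_text(concepts, threshold=0.8):
    """Assemble a screen-reader description from (concept, confidence)
    pairs, keeping only confident detections and hedging with "may"."""
    kept = [name for name, conf in concepts if conf >= threshold]
    if not kept:
        return "Photo"
    return "Image may contain: " + ", ".join(kept)

# Hypothetical detector output; labels and confidences are made up.
detections = [("three people", 0.95), ("smiling", 0.88),
              ("outdoors", 0.83), ("boat", 0.31)]
print(alt_text(detections))  # Image may contain: three people, smiling, outdoors
```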

The Facebook researchers noted that it took nearly ten months to roll the feature out publicly, because they had to train their deep learning models to recognize more than just the people in the images. People mostly care about who is in a photo and what they are doing, but sometimes it is the background that makes a photo interesting or significant.

While that may be intuitive to humans, it is quite challenging to teach a machine to provide as much useful information as possible while acknowledging the social context.

Their neural network models have millions of parameters, but the team carefully selected a set of about 100 concepts based on their prominence in photos as well as the accuracy of the visual recognition system, covering recognizable things like smiling, jewelry, cars, and boats while avoiding concepts with ambiguous meanings. Currently, they ensure the object detection algorithm reports an object only when it achieves a minimum precision of 0.8.
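
The 0.8 figure suggests a simple validation filter. A hedged sketch of how concepts might be screened by measured precision (the concept names and counts below are invented):

```python
def select_concepts(validation_counts, min_precision=0.8):
    """Keep only concepts whose validation precision TP / (TP + FP)
    meets the minimum bar (0.8 in the article)."""
    kept = {}
    for concept, (tp, fp) in validation_counts.items():
        precision = tp / (tp + fp)
        if precision >= min_precision:
            kept[concept] = precision
    return kept

# Invented validation counts per concept: (true positives, false positives).
counts = {"tree": (90, 10), "pizza": (70, 30), "car": (85, 15)}
print(select_concepts(counts))  # {'tree': 0.9, 'car': 0.85}
```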

AlphaGo Wins Game One Against World Go Champion

Last night Google’s AI AlphaGo won the first in a five-game series against the world’s best Go player, in Seoul, South Korea. The success comes just five months after a slightly less experienced version of the same program became the first machine to defeat any Go professional by winning five games against the European champion.

This victory was far more impressive though because it came at the expense of Lee Sedol, 33, who has dominated the ancient Chinese game for a decade. The European champion, Fan Hui, is ranked only 663rd in the world.

And the machine, by all accounts, played a noticeably stronger game than it did back in October, evidence that it has learned much since then. Describing their research in the journal Nature, AlphaGo’s programmers insist that it now studies mostly on its own, tuning its deep neural networks by playing millions of games against itself.

The object of Go is to surround and capture territory on a 19-by-19 board; the players alternate placing lens-shaped white or black pieces, called stones, on the intersections of the lines. Unlike in chess, the player of the black stones moves first.

The neural networks judge the position, and do so well enough to play a good game. But AlphaGo rises one level further by yoking its networks to a system that generates a “tree” of analysis that represents the many branching possibilities that the game might follow. Because so many moves are possible the branches quickly become an impenetrable thicket, one reason why Go programmers haven’t had the same success as chess programmers when using this “brute force” method alone. Chess has a far lower branching factor than Go.
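
That gap is easy to quantify: with roughly 35 legal moves per chess position versus roughly 250 per Go position (both rough averages, not exact figures), the trees diverge enormously within a few plies. A toy calculation:

```python
def leaf_positions(branching_factor, plies):
    """Leaves in a uniform game tree after the given number of plies."""
    return branching_factor ** plies

# Rough average branching factors: ~35 for chess, ~250 for Go.
for plies in (2, 4, 6):
    chess, go = leaf_positions(35, plies), leaf_positions(250, plies)
    print(f"{plies} plies: chess {chess:,} vs Go {go:,}")
```

After just six plies the uniform-tree estimate for Go exceeds chess by five orders of magnitude, which is why pure brute force stalls.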

It seems that AlphaGo’s self-improving capability largely explains its quick rise to world mastery. By contrast, chess programs’ brute-force methods required endless fine-tuning by engineers working together with chess masters. That partly explains why programs took nine years to progress from the first defeat of a grandmaster in a single game, back in 1988, to defeating then World Champion Garry Kasparov, in a six-game match, in 1997.

Even that crowning achievement—garnered with worldwide acclaim by IBM’s Deep Blue machine—came only on the second attempt. The previous year Deep Blue had managed to win only one game in the match—the first. Kasparov then exploited weaknesses he’d spotted in the computer’s game to win three and draw four subsequent games.

Sedol appears to face longer odds of staging a comeback. Unlike Deep Blue, AlphaGo can play numerous games against itself during the 24 hours until Game Two (to be streamed live tonight at 11 pm EST, 4 am GMT). The machine can study ceaselessly, unclouded by worry, ambition, fear, or hope.

Sedol, the king of the Go world, must spend much of his time sleeping—if he can. Uneasy lies the head that wears a crown.

A Deep Learning AI Chip for Your Phone

Neural networks learn to recognize objects in images and perform other artificial intelligence tasks with a very low error rate. (Just last week, a neural network built by Google’s DeepMind lab in London beat a master of the complex game of Go—one of the grand challenges of AI.) But they’re typically too complex to run on a smartphone, where, you have to admit, they’d be pretty useful. Perhaps not for much longer. At the IEEE International Solid-State Circuits Conference in San Francisco on Tuesday, MIT engineers presented a chip designed to run sophisticated image-processing neural network software on a smartphone’s power budget.

The great performance of neural networks doesn’t come free. In image processing, for example, neural networks like AlexNet work so well because they put an image through a huge number of filters, first finding image edges, then identifying objects, then figuring out what’s happening in a scene. All that requires moving data around a computer again and again, which takes a lot of energy, says Vivienne Sze, an electrical engineering professor at MIT. Sze collaborated with MIT computer science professor Joel Emer, who is also a senior research scientist at GPU-maker Nvidia.

Eyeriss has 168 processing elements (PE), each with its own memory.

“On our chip we bring the data as close as possible to the processing units, and move the data as little as possible,” says Sze. When run on an ordinary GPU, neural networks fetch the same image data multiple times. The MIT chip has 168 processing engines, each with its own dedicated memory nearby. Nearby units can talk to each other directly, and this proximity saves power. There’s also a larger, primary storage bank farther off, of course. “We try to go there as little as possible,” says Emer. To further limit data movement, the hardware compresses the data it does send and uses statistics about the data to perform fewer calculations on it than a GPU would.
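
A back-of-the-envelope model shows why caching weights next to each processing engine pays off. The layer shape below is illustrative (loosely AlexNet-like), and the accounting is a deliberate simplification, not Eyeriss's actual dataflow:

```python
def weight_fetches_naive(rows, cols, n_filters):
    # Refetch filter weights from main memory for every output position.
    return rows * cols * n_filters

def weight_fetches_cached(rows, cols, n_filters):
    # Each processing engine keeps one filter in its local memory and
    # slides it across the image, so weights leave main memory only once.
    return n_filters

rows, cols, n_filters = 224, 224, 96  # loosely AlexNet-like layer shape
print(weight_fetches_naive(rows, cols, n_filters))   # 4816896
print(weight_fetches_cached(rows, cols, n_filters))  # 96
```

Since each trip to off-chip memory costs far more energy than an arithmetic operation, cutting fetches by several orders of magnitude dominates the power savings.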

All that means that when running a powerful neural network program, the MIT chip, called Eyeriss, uses one-tenth the energy (0.3 watts) of a typical mobile GPU (5–10 W). “This is the first custom chip capable of demonstrating a full, state-of-the-art neural network,” says Sze. Eyeriss can run AlexNet, a highly accurate and computationally demanding neural network. Previous such chips could only run specific algorithms, says the MIT group; they chose to test AlexNet because it is so demanding, and they are confident the chip can run neural networks of arbitrary size.

Besides a use in smartphones, this kind of chip could help self-driving cars navigate and play a role in other portable electronics. At ISSCC, Hoi-Jun Yoo’s group at the Korea Advanced Institute of Science and Technology showed a pair of augmented reality glasses that use a neural network to train a gesture- and speech-based user interface to a particular user’s gestures, hand size, and dialect.

Yoo says the MIT chip may be able to run neural networks at low power once they’re trained, but he notes that the even more computationally intensive learning process for AlexNet can’t be done on it. The MIT chip could in theory run any kind of trained neural network, whether it analyzes images, sounds, medical data, or anything else. Yoo says it’s also important to design chips that may be more specific to a particular category of task—such as following hand gestures—and are better at learning those tasks on the fly. He says this could make for a better user experience in wearable electronics, for example. These systems need to be able to learn on the fly because the world is unpredictable and each user is different. Your computer should start to fit you like your favorite pair of jeans.

How GPUs are Revolutionizing Machine Learning

NVIDIA announced that Facebook will accelerate its next-generation computing system with the NVIDIA Tesla Accelerated Computing Platform which will enable them to drive a broad range of machine learning applications.

Facebook is the first company to train deep neural networks on the new Tesla M40 GPUs, introduced last month. These will play a large role in their new open source “Big Sur” computing platform, Facebook AI Research’s (FAIR) purpose-built system designed specifically for neural network training.

Open Rack V2 compatible 8-GPU server.

Big Sur is two times faster than Facebook’s existing system and will enable the company to train twice as many neural networks, which in turn will help develop more accurate neural network models and new classes of advanced applications.

Training the sophisticated deep neural networks that power applications such as speech translation and autonomous vehicles requires a massive amount of computing performance.

With GPUs accelerating the training times from weeks to hours, it’s not surprising that nearly every leading machine learning researcher and developer is turning to the Tesla Accelerated Computing Platform and the NVIDIA Deep Learning software development kit.

A recent article on WIRED explains how GPUs have proven to be remarkably adept at deep learning and how large web companies like Facebook, Google and Baidu are shifting their computationally intensive applications to GPUs.

The artificial intelligence race is on, and it’s powered by GPU-accelerated machine learning.

It’s happening: ‘Pepper’ robot gains emotional intelligence

Last week we weighed in on the rise of robotica, aka sexbots, noting that improvements in emotion and speech recognition would likely spur development in this field. Now a new offering from SoftBank promises to be just such a game changer, equipping robots with the technology necessary to interact with humans in social settings. The robot is called Pepper, and it is being launched at a loss by its makers SoftBank and Aldebaran.

Pepper is being billed as the first “emotionally intelligent” robot. While it can’t wash your floors or take out the trash, it may just decompress your next domestic row with a witty remark or well-timed turn of phrase. It accomplishes such feats through the use of novel emotion recognition techniques. Emotion recognition may seem like a strange, and perhaps unnecessary, skill for a robot. However, it will be a crucial one if machines are ever able to make the leap from the factory worker to domestic caregiver.

Even in humans, emotion recognition can be devilishly difficult to achieve. Those afflicted with autism represent a portion of humanity that has been referred to as “emotion-blind” due to the difficulty they have in reading expressions.  In many ways, robots have hitherto occupied similar territory. While Softbank hasn’t revealed the exact proprietary algorithms used to achieve emotion recognition, the smart money is on some form of deep neural network.

To date, most attempts at emotion recognition have employed a branch of artificial intelligence called machine learning, in which training data, most often labeled, is fed into an algorithm that uses statistical techniques to “recognize” characteristics that set the examples apart. It’s likely that Pepper uses a variation on this, employing algorithms trained on thousands of labeled photographs or videos to learn what combination of pixels represent a smiling face versus a startled or angry one.
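
As a stand-in for whatever proprietary model Pepper actually uses, the same supervised-learning idea can be shown with a tiny nearest-centroid classifier over hand-crafted "expression features." Everything here, from the feature meanings to the labels, is invented for illustration:

```python
def train_centroids(examples):
    """Average the feature vectors of each labelled class, giving a
    tiny nearest-centroid classifier."""
    sums, counts = {}, {}
    for features, label in examples:
        acc = sums.setdefault(label, [0.0] * len(features))
        for i, x in enumerate(features):
            acc[i] += x
        counts[label] = counts.get(label, 0) + 1
    return {lbl: [x / counts[lbl] for x in acc] for lbl, acc in sums.items()}

def classify(centroids, features):
    """Label a new example by its nearest class centroid."""
    def sq_dist(lbl):
        return sum((a - b) ** 2 for a, b in zip(centroids[lbl], features))
    return min(centroids, key=sq_dist)

# Invented 2-D "expression features" (say, mouth curvature and brow height).
data = [([0.9, 0.1], "happy"), ([0.8, 0.2], "happy"),
        ([0.1, 0.9], "angry"), ([0.2, 0.8], "angry")]
model = train_centroids(data)
print(classify(model, [0.85, 0.15]))  # happy
```

A production system would replace the hand-crafted features with ones learned by a deep network, but the train-on-labels, predict-on-new-inputs loop is the same.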

Pepper is also connected to the cloud, feeding data from its sensors to server clusters, where the lion’s share of processing will take place.  This should allow their emotion recognition algorithms to improve over time, as repeated use provides fresh training examples. A similar method enabled Google’s speech recognition system to overtake so many others in the field. Every time someone uses the system and corrects a misapprehended word, they provide a new training example for the AI to improve its performance. In the case of a massive search system like Google’s, training examples add up very quickly.

This may explain why SoftBank is willing to go ahead with the launch of Pepper despite financials indicating it will be a loss-making venture. If, rather than optimizing for profit, they are using Pepper as a means toward perfecting emotion recognition, then this may be part of a larger play to gain superior intellectual property. If that’s the case, then it probably won’t be long before we see other tech giants wading into the arena, offering new and competitive variations on Pepper.

While it may seem strange to think of our emotions as being a lucrative commodity, commanding millions of tech dollars and vied for by sleek-looking robots, such a reality could well be in store.



About me

My name is Sayed Ahmadreza Razian and I hold a master’s degree in Artificial Intelligence.
Click here for my CV and resume page

Related topics such as image processing, machine vision, virtual reality, machine learning, data mining, and monitoring systems are my research interests, and I intend to pursue a PhD in one of these fields.

To view my introduction and resume page, click here.

My Scientific expertise
  • Image processing
  • Machine vision
  • Machine learning
  • Pattern recognition
  • Data mining - Big Data
  • CUDA Programming
  • Game and Virtual reality

Download Nokte for Free


Coming Soon....

Greatest hits

Anyone who has never made a mistake has never tried anything new.

Albert Einstein

Waiting hurts. Forgetting hurts. But not knowing which decision to take can sometimes be the most painful.

Paulo Coelho

Imagination is more important than knowledge.

Albert Einstein

You are what you believe yourself to be.

Paulo Coelho

One day you will wake up and there won’t be any more time to do the things you’ve always wanted. Do it now.

Paulo Coelho

It’s the possibility of having a dream come true that makes life interesting.

Paulo Coelho

Gravitation is not responsible for people falling in love.

Albert Einstein

The fear of death is the most unjustified of all fears, for there’s no risk of accident for someone who’s dead.

Albert Einstein

