Machine learning

GPUs Help Find a Massive New Reef Hiding Behind Great Barrier Reef

Australian scientists made a significant discovery hiding behind the world-famous Great Barrier Reef. The discovery was made using cutting-edge surveying technology, which revealed vast fields of doughnut-shaped mounds measuring up to 300 meters across and up to 10 meters deep.

“We’ve known about these geological structures in the northern Great Barrier Reef since the 1970s and 80s, but never before has the true nature of their shape, size and vast scale been revealed,” said Dr Robin Beauman of James Cook University, who helped lead the research.

The scientists from James Cook University, Queensland University of Technology, and the University of Sydney used LiDAR data collected by the Australian Navy to help reveal this deeper, subtler reef. They then used CUDA and GeForce GTX 1080 GPUs to compile and visualize the huge 3D bathymetry datasets.
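
Much of that work comes down to turning vast numbers of individual soundings into a regular depth grid that can be rendered and explored. The sketch below is only a rough NumPy illustration of that gridding step; the array names, extent and resolution are made up, and the researchers' actual CUDA pipeline runs on the GPU.

```python
import numpy as np

# Hypothetical soundings: (easting, northing, depth) triples covering a
# 1 km x 1 km patch; real survey data is far denser and spans many tiles.
rng = np.random.default_rng(0)
points = rng.random((100_000, 3)) * np.array([1000.0, 1000.0, 50.0])

cell = 5.0                        # illustrative grid resolution in metres
nx = ny = int(1000.0 / cell)

# Bin every sounding into a grid cell and average the depths per cell.
ix = np.clip((points[:, 0] / cell).astype(int), 0, nx - 1)
iy = np.clip((points[:, 1] / cell).astype(int), 0, ny - 1)
flat = iy * nx + ix
depth_sum = np.bincount(flat, weights=points[:, 2], minlength=nx * ny)
counts = np.bincount(flat, minlength=nx * ny)
grid = np.where(counts > 0, depth_sum / np.maximum(counts, 1), np.nan).reshape(ny, nx)

# 'grid' can then be rendered with a red (shallow) to blue (deep) colour ramp,
# as in the figure below, e.g. with matplotlib's imshow.
```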

“Having a high-performance GPU has been critical to this ocean mapping research,” said Beauman.

North-westerly view of the Bligh Reef area off Cape York. Depths are colored red (shallow) to blue (deep), over a depth range of about 50 meters. Bathymetry data from Australian Hydrographic Service.

The discovery has opened up many new avenues of research.

“For instance, what do the 10-20 meter thick sediments of the bioherms tell us about past climate and environmental change on the Great Barrier Reef over this 10,000 year time-scale? And, what is the finer-scale pattern of modern marine life found within and around the bioherms now that we understand their true shape?”

Next up, the researchers plan to employ autonomous underwater vehicle technologies to unravel the physical, chemical and biological processes of the structures.

Teaching an AI to Detect Key Actors in Multi-person Videos

Researchers from Google and Stanford have taught their computer vision model to detect the most important person in a multi-person video scene – for example, identifying the shooter in a basketball game, where a single scene can contain dozens or even hundreds of people.

Using 20 Tesla K40 GPUs and the cuDNN-accelerated TensorFlow deep learning framework, the team trained their recurrent neural network on 257 NCAA basketball games from YouTube. An attention mask selects which of the people in the scene are most relevant to the action being performed, then tracks the relevance of each person as time proceeds. The team published a paper detailing more of their work.
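
The core mechanism is a soft attention mask over the people detected in each frame. The NumPy sketch below illustrates that single step only; the feature sizes and randomly initialised weights are placeholders, not the published TensorFlow model, and a recurrent layer would carry the pooled feature across frames.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

# Hypothetical per-frame input: one feature vector per detected player
# (in the real model these come from a CNN over player regions).
num_players, feat_dim = 10, 128
rng = np.random.default_rng(0)
player_feats = rng.standard_normal((num_players, feat_dim))

# Learned attention weights (randomly initialised here for illustration).
w_att = rng.standard_normal(feat_dim)

scores = player_feats @ w_att          # one relevance score per player
attention = softmax(scores)            # attention mask, sums to 1 over players
frame_repr = attention @ player_feats  # attention-weighted frame feature

# The player with the highest weight is the model's guess at the key actor
# for this frame.
key_player = int(np.argmax(attention))
```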

The distribution of attention for the model with tracking, at the beginning of “free-throw success”. The attention is concentrated at a specific defender’s position. Free-throws have a distinctive defense formation, and observing the defenders can be helpful as shown in the sample images in the top row.

Over time the system can identify not only the most important actor, but also potentially important actors and the events they are associated with – for example, it can understand that the player going up for a layup could be important, but that the most important player is the one who then blocks the shot.

New Deep Learning Method Enhances Your Selfies

Researchers from Adobe Research and The Chinese University of Hong Kong created an algorithm that automatically separates subjects from their backgrounds so you can easily replace the background and apply filters to the subject.

Their research paper mentions there are good user-guided tools that support manually creating masks to separate subjects from the background, but the “tools are tedious and difficult to use, and remain an obstacle for casual photographers who want their portraits to look good.”

A highly accurate automatic portrait segmentation method allows many portrait processing tools to be fully automatic.

Using a TITAN X GPU and the cuDNN-accelerated Caffe deep learning framework, the researchers trained their convolutional neural network on 1,800 portrait images from Flickr. Their GPU-accelerated method was 20x faster than a CPU-only approach.
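
Once the network has predicted a per-pixel foreground mask, background replacement reduces to an alpha blend. The sketch below assumes a mask has already been predicted (a random stand-in here, not the authors' Caffe network) and shows only the compositing step.

```python
import numpy as np

# Hypothetical inputs: a portrait photo and a per-pixel foreground probability
# map, as a trained portrait segmentation network would emit.
h, w = 480, 360
rng = np.random.default_rng(0)
photo = rng.integers(0, 256, (h, w, 3), dtype=np.uint8)
fg_prob = rng.random((h, w))                            # stand-in for the network output
new_background = np.zeros((h, w, 3), dtype=np.uint8)   # plain black backdrop

# Soft-blend the subject onto the new background using the mask.
alpha = fg_prob[..., None]                              # broadcast over colour channels
composite = (alpha * photo + (1.0 - alpha) * new_background).astype(np.uint8)
```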

Portrait video segmentation is next on the radar for the researchers.

Facial Recognition Software Helping Caterpillar Identify Sleepy Operators

Operator fatigue can potentially be a fatal problem for Caterpillar employees driving the massive mine trucks on long, repetitive shifts throughout the night.

Caterpillar recognized this and joined forces with Seeing Machines to install their fatigue detection software in thousands of mining trucks worldwide. Using NVIDIA TITAN X and GTX 1080 GPUs along with the cuDNN-accelerated Theano, TensorFlow and Caffe deep learning frameworks, the Australian-based tech company trained their software for face tracking, gaze tracking, driver attention region estimation, facial recognition, and fatigue detection.

On board the truck, a camera, speaker and light system monitors the driver. Once a potential “fatigue event” is detected, an alarm sounds in the cab and a video clip of the driver is sent to a 24-hour “sleep fatigue center” at Caterpillar headquarters.
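
Schematically, the on-board flow is a rolling buffer of frames feeding a fatigue classifier, with an alarm and a clip upload triggered when an event is flagged. The sketch below is purely illustrative; every function name is a placeholder and none of it is Seeing Machines' software.

```python
from collections import deque

def looks_fatigued(frame):
    """Placeholder for the trained microsleep / eye-closure classifier."""
    return frame.get("eyes_closed_ms", 0) > 1500

def sound_alarm():
    print("ALARM: possible microsleep detected")          # in-cab speaker and light

def send_clip_to_fatigue_center(clip):
    print(f"Forwarding {len(clip)}-frame clip for human review")

def monitor(frames, clip_len=30):
    recent = deque(maxlen=clip_len)                        # rolling buffer of recent frames
    for frame in frames:
        recent.append(frame)
        if looks_fatigued(frame):
            sound_alarm()
            send_clip_to_fatigue_center(list(recent))

# Toy run: sixty alert frames followed by one long eye closure.
monitor([{"eyes_closed_ms": 0}] * 60 + [{"eyes_closed_ms": 2000}])
```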

“This system automatically scans for the characteristics of microsleep in a driver,” Sal Angelone, a fatigue consultant at the company, told The Huffington Post, referencing the brief, involuntary pockets of unconsciousness that are highly dangerous to drivers. “But this is verified by a human working at our headquarters in Peoria.”

In the past year, Caterpillar referenced two instances – in one, a driver had three fatigue events within four hours and was contacted onsite and made to take a nap. In another, a night-shift truck driver who experienced a fatigue event realized it was a sign of a sleep disorder and asked his management for medical assistance.

It’s only a matter of time before this technology is incorporated into every car on the road.

Smart Home Hub Brings Artificial Intelligence Into Your Home

A new AI-powered device will be able to replace all of your various smart home control apps, recognize specific people, and respond to a range of emotions and gestures.

AI Build is a London-based startup focused on making your smart home more natural and intuitive. Powered by an NVIDIA Jetson TX1 and six cameras, the aiPort keeps track of your daily activities and uses this knowledge to get better at helping you. It learns your preferences, recognizes your body language, and adapts its actions with your comfort in mind.

The startup plans to launch a crowdfunding campaign later this year and to sell the device for about $1,000.

Autonomous Robot Starts Work as Office Manager

Programmed with the latest artificial intelligence software, Betty will spend the next two months working as an office manager at Transport Systems Catapult, monitoring staff and checking environmental conditions.

The robot, developed by engineers at the University of Birmingham, uses NVIDIA GPUs for various forms of computer vision, like feature extraction, and for 3D image processing to create a map of the surrounding area. This allows Betty to identify desks, chairs and other objects that she must negotiate while moving around the office, and to observe her colleagues’ movements through activity recognition.

“For robots to work alongside humans in normal work environments it is important that they are both robust enough to operate autonomously without expert help, and that they learn to adapt to their environments to improve their performance,” said Dr Nick Hawes, from the School of Computer Science at the University of Birmingham. “Betty demonstrates both these abilities in a real working environment: we expect her to operate for two months without expert input, whilst using cutting-edge AI techniques to increase her understanding of the world around her.”

Betty is part of the EU-funded STRANDS project, in which robots are learning how to act intelligently and independently in real-world environments while understanding 3D space.

Open-Access Visual Search Tool for Satellite Imagery

A new project by Carnegie Mellon University researchers provides journalists, citizen scientists, and other researchers with the ability to quickly scan large geographical regions for specific visual features.

Simply click on a feature in the satellite imagery – a baseball diamond, cul-de-sac, tennis court – and Terrapattern will find other things that look similar in the area and pinpoint them on the map.

The researchers trained a deep neural network for five days on an NVIDIA GeForce GPU. The model looks at small squares of the landscape and, by comparing those patterns to a huge database of tagged map features from OpenStreetMap, has learned to associate them with certain concepts.
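
At query time, a click corresponds to a nearest-neighbour search over per-tile feature vectors. The NumPy sketch below shows that idea with random stand-in features; it is not Terrapattern's actual code, and the tile count and feature size are made up.

```python
import numpy as np

# Hypothetical data: one feature vector per map tile, as produced by the
# trained network's penultimate layer (random stand-ins here).
num_tiles, dim = 10_000, 256
rng = np.random.default_rng(0)
tile_feats = rng.standard_normal((num_tiles, dim)).astype(np.float32)
tile_feats /= np.linalg.norm(tile_feats, axis=1, keepdims=True)

def most_similar(query_idx, k=10):
    """Return the k tiles whose features are closest to the clicked tile."""
    sims = tile_feats @ tile_feats[query_idx]     # cosine similarity to the query tile
    order = np.argsort(-sims)
    return [int(i) for i in order if i != query_idx][:k]

# Clicking a baseball diamond corresponds to passing that tile's index here.
matches = most_similar(1234)
```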

Currently, Terrapattern is limited to Pittsburgh, San Francisco, New York City and Detroit, but access to more cities is coming soon.

Assisting Farmers with Artificial Intelligence

With the planet getting warmer and carbon dioxide levels steadily creeping up, companies are using deep learning to help farmers cope with the effects that climate change is having on their crops.

An article on MIT Technology Review highlights PEAT, a German company using CUDA, TITAN X GPUs and the cuDNN-accelerated Caffe deep learning framework to provide farmers with a plant disease diagnostics and management tool. Farmers can take a picture of their affected plants, upload it to PEAT’s “Plantix” mobile app and get treatment recommendations within seconds. The database currently covers 52 crops worldwide, and the app can detect 160 plant diseases, pests and nutrient deficiencies with 95% accuracy.
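
Under the hood this is an image classification problem followed by a treatment lookup. The sketch below is a minimal, self-contained illustration of that flow; the labels, advice strings and the stand-in classifier are invented for the example and are not PEAT's model or data.

```python
import numpy as np

# Placeholder label set and treatment advice; the real app covers ~160
# diseases, pests and nutrient deficiencies across 52 crops.
LABELS = ["healthy", "tomato_late_blight", "wheat_leaf_rust"]
TREATMENTS = {
    "tomato_late_blight": "Remove affected leaves; apply a copper-based fungicide.",
    "wheat_leaf_rust": "Apply a suitable fungicide; favour resistant varieties.",
}

def classify(image):
    """Stand-in for the trained CNN; returns class probabilities."""
    logits = np.random.default_rng(0).standard_normal(len(LABELS))
    return np.exp(logits) / np.exp(logits).sum()

def diagnose(image):
    probs = classify(image)
    label = LABELS[int(np.argmax(probs))]
    advice = TREATMENTS.get(label, "No treatment needed.")
    return label, float(probs.max()), advice

# Toy call with a blank "photo" standing in for the farmer's upload.
label, confidence, advice = diagnose(np.zeros((224, 224, 3)))
```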

PEAT’s Plantix app provides detailed symptom descriptions to empower farmers to make autonomous decisions on their disease management.

As mobile phones are now ubiquitous throughout the developing world, this solution provides the last-mile connectivity that farmers need to deal with the impact of a changing climate.

Facebook and CUDA Accelerate Deep Learning Research

Last Thursday at the International Conference on Machine Learning (ICML) in New York, Facebook announced a new piece of open-source software aimed at streamlining and accelerating deep learning research. The software, named Torchnet, provides developers with a consistent set of widely used deep learning functions and utilities. Torchnet allows developers to write code in a consistent manner, speeding development and promoting code re-use both between experiments and across multiple projects.

Torchnet sits atop the popular Torch deep learning framework and benefits from GPU acceleration using CUDA and cuDNN. Further, Torchnet has built-in support for asynchronous, parallel data loading and can make full use of multiple GPUs for vastly improved iteration times. This automatic support for multi-GPU training helps Torchnet take full advantage of powerful systems like the NVIDIA DGX-1 with its eight Tesla P100 GPUs.

According to the Torchnet research paper, its modular design makes it easy to re-use code in a series of experiments. For instance, running the same experiments on a number of different datasets is accomplished simply by plugging in different dataloaders. And the evaluation criterion can be changed easily by plugging in a different performance meter.
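
Torchnet itself is a Lua package for Torch, but the modular pattern the paper describes translates directly: a generic engine, pluggable dataloaders and pluggable meters. The Python sketch below illustrates that pattern only; the names are not Torchnet's actual API.

```python
class AccuracyMeter:
    """Swappable evaluation criterion: counts exact-match predictions."""
    def __init__(self):
        self.correct = self.total = 0
    def add(self, prediction, target):
        self.correct += int(prediction == target)
        self.total += 1
    def value(self):
        return self.correct / max(self.total, 1)

def run_experiment(dataloader, model, meter):
    """Same engine for every experiment; only the plugged-in parts change."""
    for example, target in dataloader:
        meter.add(model(example), target)
    return meter.value()

# Swapping datasets means swapping dataloaders; swapping the evaluation
# criterion means swapping the meter. The engine stays untouched.
toy_loader = [(x, x % 2) for x in range(100)]
parity_model = lambda x: x % 2
print(run_experiment(toy_loader, parity_model, AccuracyMeter()))
```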

Torchnet adds another powerful tool to the data scientist’s toolkit and will help speed the design and training of neural networks, so researchers can focus on their next great advancement.

Artificial Intelligence System Predicts How You Will Look With Different Hair Styles

A new personalized search engine helps you explore what you would look like with brown hair, curly hair or in a different time period.

Upload a selfie to Dreambit and type in a term like “curly hair” or “1930 woman”, and the software’s algorithm searches through photo collections for similar images and seamlessly maps your face onto images matching your search criteria.

Ira Kemelmacher-Shlizerman, a computer vision researcher at the University of Washington, developed the image recognition software using a TITAN X GPU and the cuDNN-accelerated Caffe deep learning framework for both training and inference. She presented her paper at this week’s SIGGRAPH 2016, and the search engine will be publicly available later this year.

Illustration of the system. The system gets as input a photo and a text query. The text query is used to search a web image engine. The retrieved photos are processed to compute a variety of face features and skin and hair masks, and ranked based on how well they match to the input photo. Finally, the input face is blended into the highest ranked candidates.
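
The caption above spells out the pipeline: retrieve candidate images for the text query, extract face features and masks, rank the candidates by how well they match the input photo, then blend. The Python sketch below mirrors that flow; every helper is a trivial stand-in, not the actual Dreambit implementation.

```python
def image_search(query):
    return [f"{query}_{i}.jpg" for i in range(20)]          # stand-in web image search

def face_features(photo):
    return hash(photo) % 100                                # stand-in feature extractor

def match_score(a, b):
    return -abs(a - b)                                      # higher means more similar

def blend_face(source, target):
    return f"blend({source} -> {target})"                   # stand-in face blending

def dreambit_style_result(input_photo, query, k=5):
    """Retrieve candidates, rank by facial similarity, blend the face into the best."""
    query_feats = face_features(input_photo)
    candidates = image_search(query)                        # e.g. "curly hair", "1930 woman"
    ranked = sorted(candidates,
                    key=lambda p: match_score(query_feats, face_features(p)),
                    reverse=True)
    return [blend_face(input_photo, target) for target in ranked[:k]]

print(dreambit_style_result("selfie.jpg", "curly hair"))
```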

Dreambit is also able to predict what a child might look like when they are forty years old or with red hair, black hair, or even a shaved head.

“It’s hard to recognize someone by just looking at a face, because we as humans are so biased towards hairstyles and hair colors,” said Kemelmacher-Shlizerman. “With missing children, people often dye their hair or change the style so age-progressing just their face isn’t enough. This is a first step in trying to imagine how a missing person’s appearance might change over time.”


About me

My name is Sayed Ahmadreza Razian, and I hold a master’s degree in Artificial Intelligence.
Click here for my CV/Resume page.

Related topics such as image processing, machine vision, virtual reality, machine learning, data mining, and monitoring systems are my research interests, and I intend to pursue a PhD in one of these fields.

Click here to view the introduction and resume page.

My Scientific expertise
  • Image processing
  • Machine vision
  • Machine learning
  • Pattern recognition
  • Data mining - Big Data
  • CUDA Programming
  • Game and Virtual reality

Download Nokte for free


Coming Soon....

Greatest hits

Imagination is more important than knowledge.

Albert Einstein

Gravitation is not responsible for people falling in love.

Albert Einstein

Waiting hurts. Forgetting hurts. But not knowing which decision to take can sometimes be the most painful.

Paulo Coelho

One day you will wake up and there won’t be any more time to do the things you’ve always wanted. Do it now.

Paulo Coelho

The fear of death is the most unjustified of all fears, for there’s no risk of accident for someone who’s dead.

Albert Einstein

You are what you believe yourself to be.

Paulo Coelho

It’s the possibility of having a dream come true that makes life interesting.

Paulo Coelho

Anyone who has never made a mistake has never tried anything new.

Albert Einstein

