Artificial Intelligence

Latest NVIDIA JetPack Developer Tools Will Double Your Deep Learning Performance

NVIDIA Helping Developers Get Started with Deep Learning

From self-driving cars to medical diagnostics, deep learning powered artificial intelligence is impacting nearly every industry.

In 2015, NVIDIA’s Deep Learning Institute delivered more than 16,000 hours of training to help data scientists and developers master this burgeoning field of AI – and the need for deep learning training is rapidly growing.

In the next four months, developers can take more than 80 instructor-led workshops and hands-on labs at one of eight GPU Technology Conferences around the world – starting this week at GTC China.

“We want to share all our knowledge about deep learning with the world so others can create amazing things with it,” said Mark Ebersole, director of the institute.

Julie Bernauer, an NVIDIA Deep Learning Institute instructor, teaches a class on deep learning on GPUs.

The Deep Learning Institute has joined forces with three industry-leading organizations to train data scientists and developers interested in deep learning:

  • Teaming up with Coursera to create a series of courses on how deep learning is poised to transform healthcare
  • Collaborating with Microsoft on a hands-on workshop about how to use deep learning to create smarter robots
  • Partnering with Udacity to help developers learn how to build a self-driving car

GPUs Help Find a Massive New Reef Hiding Behind Great Barrier Reef

Australian scientists have made a significant discovery hidden behind the world-famous Great Barrier Reef: cutting-edge surveying technology revealed vast fields of doughnut-shaped mounds, each up to 300 meters across and up to 10 meters deep.

“We’ve known about these geological structures in the northern Great Barrier Reef since the 1970s and 80s, but never before has the true nature of their shape, size and vast scale been revealed,” said Dr Robin Beauman of James Cook University, who helped lead the research.

The scientists, from James Cook University, Queensland University of Technology, and the University of Sydney, used LiDAR data collected by the Australian Navy to help reveal this deeper, subtler reef. They then used CUDA and GeForce GTX 1080 GPUs to compile and visualize the huge 3D bathymetry datasets.

“Having a high-performance GPU has been critical to this ocean mapping research,” says Beauman.
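
The red-to-blue depth coloring described in the figure caption below is straightforward to sketch. The snippet here is an illustration only, not the team's CUDA pipeline; `depth_to_rgb` and the toy grid are hypothetical names, and the linear ramp is a simplification of real bathymetry color maps:

```python
def depth_to_rgb(depth, d_min, d_max):
    """Map a 2-D grid of depths (meters) to (r, g, b) colors,
    red (shallow) fading to blue (deep)."""
    rows = []
    for row in depth:
        out = []
        for d in row:
            t = (d - d_min) / (d_max - d_min)  # 0 = shallowest, 1 = deepest
            out.append((1.0 - t, 0.0, t))      # red falls, blue rises with depth
        rows.append(out)
    return rows

# A toy 2x2 depth grid spanning the ~50 m range mentioned in the caption
grid = [[0.0, 25.0], [50.0, 12.5]]
colors = depth_to_rgb(grid, 0.0, 50.0)
print(colors[0][0])  # shallowest cell -> (1.0, 0.0, 0.0), pure red
```

A GPU renderer applies the same per-sample mapping, but in a shader over millions of grid cells at once.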

North-westerly view of the Bligh Reef area off Cape York. Depths are colored red (shallow) to blue (deep), over a depth range of about 50 meters. Bathymetry data from Australian Hydrographic Service.

The discovery has opened up many other new avenues of research.

“For instance, what do the 10-20 meter thick sediments of the bioherms tell us about past climate and environmental change on the Great Barrier Reef over this 10,000 year time-scale? And, what is the finer-scale pattern of modern marine life found within and around the bioherms now that we understand their true shape?”

Next up, the researchers plan to employ autonomous underwater vehicle technologies to unravel the physical, chemical and biological processes of the structures.

Deep Learning to Unlock Mysteries of Parkinson’s Disease

Researchers at The Australian National University are using deep learning and NVIDIA technologies to better understand the progression of Parkinson’s disease.

Currently it is difficult to determine which type of Parkinson’s someone has or how quickly the condition will progress.
The study will be conducted over the next five years at Canberra Hospital in Australia and will involve 120 people with the disease and an equal number of non-sufferers as a control group.

“There are different types of Parkinson’s that can look similar at the point of onset, but they progress very differently,” says Dr Deborah Apthorp of the ANU Research School of Psychology. “We are hoping the information we collect will differentiate between these different conditions.”

Researchers Alex Smith (L) and Dr Deborah Apthorp (R) work with Parkinson’s disease sufferer Ken Hood (middle).

Dr Apthorp said the research will measure brain imaging, eye tracking, visual perception and postural sway.

From the data collected during the study, the researchers will be using a GeForce GTX 1070 GPU and cuDNN to train their deep learning models to help find patterns that indicate degradation of motor function correlating with Parkinson’s.

The researchers plan to incorporate virtual reality into their work by having sufferers wear head-mounted displays (HMDs), which will help them better understand how self-motion perception is altered in Parkinson’s disease, using stimuli that mimic the visual scene during self-motion.

“Additionally, we would like to explore the use of eye tracking built into HMDs, which is a much lower-cost alternative to a full research eye-tracking system and consolidates everything into a single, highly portable and versatile piece of equipment,” says researcher Alex Smith.

GPUs Help Cut Siri’s Error Rate by Half

To make Siri great, Apple hired several artificial intelligence experts three years ago to apply deep learning to its intelligent mobile assistant.

The team began training a neural net to replace the original Siri. “We have the biggest and baddest GPU farm cranking all the time,” says Alex Acero, who heads the speech team.

“The error rate has been cut by a factor of two in all the languages, more than a factor of two in many cases,” says Acero. “That’s mostly due to deep learning and the way we have optimized it.”

Besides Siri, Apple’s deep learning and neural nets are now found all over its products and services — including fraud detection on the Apple Store, recognizing faces and locations in your photos, and helping identify the most useful feedback from thousands of beta-tester reports.

“The typical customer is going to experience deep learning on a day-to-day level that [exemplifies] what you love about an Apple product,” says Phil Schiller, senior vice president of worldwide marketing at Apple. “The most exciting [instances] are so subtle that you don’t even think about it until the third time you see it, and then you stop and say, How is this happening?”

Teaching an AI to Detect Key Actors in Multi-person Videos

Researchers from Google and Stanford have taught their computer vision model to detect the most important person in a multi-person video scene – for example, identifying the shooter in a basketball game, where a scene typically contains dozens or hundreds of people.

Using 20 Tesla K40 GPUs and the cuDNN-accelerated TensorFlow deep learning framework, the team trained a recurrent neural network on 257 NCAA basketball games from YouTube. An attention mask selects which of the people in the scene are most relevant to the action being performed, then tracks each person’s relevance as time proceeds. The team published a paper detailing their work.
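
The core of such an attention mask is a softmax over per-player relevance scores; in a video model these weights are recomputed frame by frame to track relevance over time. The snippet below is a hypothetical single-step illustration with made-up names (`attend`, `attention_weights`) and toy features, not the paper's network:

```python
import math

def attention_weights(scores):
    """Softmax: turn per-player relevance scores into weights summing to 1."""
    m = max(scores)                              # subtract max for stability
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def attend(player_features, query):
    """Score each player's feature vector against a query, then return the
    attention weights and the weighted-average ('attended') feature."""
    scores = [sum(f * q for f, q in zip(feat, query)) for feat in player_features]
    weights = attention_weights(scores)
    dim = len(player_features[0])
    attended = [sum(w * feat[i] for w, feat in zip(weights, player_features))
                for i in range(dim)]
    return weights, attended

# Three players with 2-D features; the query happens to favor the second
players = [[0.1, 0.0], [2.0, 1.0], [0.0, 0.2]]
weights, ctx = attend(players, query=[1.0, 1.0])
print(max(range(3), key=lambda i: weights[i]))  # -> 1, the most relevant player
```

In the real system the features come from a convolutional network and the query from the recurrent state, but the weighting step works the same way.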

The distribution of attention for the model with tracking, at the beginning of “free-throw success”. The attention is concentrated at a specific defender’s position. Free-throws have a distinctive defense formation, and observing the defenders can be helpful as shown in the sample images in the top row.

Over time, the system can identify not only the most important actor but also potentially important actors and the events associated with them – for example, understanding that the player going up for a layup could be important, but that the most important player is the one who then blocks the shot.

New Deep Learning Method Enhances Your Selfies

Researchers from Adobe Research and The Chinese University of Hong Kong created an algorithm that automatically separates subjects from their backgrounds so you can easily replace the background and apply filters to the subject.

Their research paper mentions there are good user-guided tools that support manually creating masks to separate subjects from the background, but the “tools are tedious and difficult to use, and remain an obstacle for casual photographers who want their portraits to look good.”

A highly accurate automatic portrait segmentation method allows many portrait processing tools to be fully automatic.

Using a TITAN X GPU and the cuDNN-accelerated Caffe deep learning framework, the researchers trained their convolutional neural network on 1,800 portrait images from Flickr. Their GPU-accelerated method was 20x faster than a CPU-only approach.
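
Once a segmentation network produces a portrait mask, background replacement reduces to a per-pixel blend. The sketch below is a hypothetical illustration (`composite` and the grayscale toy images are invented here, not Adobe's implementation):

```python
def composite(subject, background, mask):
    """Per-pixel alpha blend: out = mask * subject + (1 - mask) * background.

    `subject` and `background` are grayscale images as nested lists; `mask`
    holds 1.0 where the network labeled 'portrait' and 0.0 elsewhere, with
    soft values near the boundary giving a feathered edge.
    """
    h, w = len(subject), len(subject[0])
    return [[mask[y][x] * subject[y][x] + (1.0 - mask[y][x]) * background[y][x]
             for x in range(w)] for y in range(h)]

subj = [[200, 200], [200, 200]]   # bright subject
bg   = [[10, 10], [10, 10]]       # dark replacement background
mask = [[1.0, 0.0], [0.5, 0.0]]   # left column is (partly) portrait
out = composite(subj, bg, mask)
print(out[0])  # -> [200.0, 10.0]: subject kept on the left, background on the right
```

The same blend applies per color channel for RGB images; filters can likewise be applied only where the mask is high.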

Portrait video segmentation is next on the radar for the researchers.

Advanced Real-Time Visualization for Robotic Heart Surgery

Researchers at the Harvard Biorobotics Laboratory are harnessing the power of GPUs to generate real-time volumetric renderings of patients’ hearts. The team has built a robotic system to autonomously steer commercially available cardiac catheters that can acquire ultrasound images from within the heart. They tested their system in the clinic and reported their results at the 2016 IEEE International Conference on Robotics and Automation (ICRA) in Stockholm, Sweden.

The team used an Intracardiac Echocardiography (ICE) catheter, which is equipped with an ultrasound transducer at the tip, to acquire 2D images from within a beating heart. Using NVIDIA GPUs, the team was able to reconstruct a 4D (3D + time) model of the heart from these ultrasound images.

Generating a 4D volume begins with co-registering ultrasound images that are acquired at different imaging angles but at the same phase of the cardiac cycle. The position and rotation of each image with respect to the world coordinate frame is measured using electromagnetic (EM) trackers that are attached to the catheter body. This point cloud is then discretized to lie on a 3D grid. Next, infilling is performed to fill the gaps between the slices, generating a dense volumetric representation of the heart. Finally, the volumes are displayed to the surgeon using volume rendering via raycasting, leveraging CUDA–OpenGL interoperability. The team accelerated the volume reconstruction and rendering algorithms using two NVIDIA TITAN GPUs.
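
The discretization step can be sketched as binning tracked sample points onto a voxel grid. This is a rough pure-Python illustration under assumed names (`voxelize`, the toy points), not the team's CUDA implementation; infilling between sparse slices would be a separate pass:

```python
def voxelize(points, origin, voxel_size, dims):
    """Discretize world-coordinate samples onto a 3-D grid.

    Each EM-tracked ultrasound pixel contributes an ((x, y, z), value) pair;
    points are binned into voxels by integer division, and samples that share
    a voxel are averaged. Returns a sparse dict keyed by voxel index.
    """
    sums, counts = {}, {}
    for (x, y, z), value in points:
        i = int((x - origin[0]) // voxel_size)
        j = int((y - origin[1]) // voxel_size)
        k = int((z - origin[2]) // voxel_size)
        if 0 <= i < dims[0] and 0 <= j < dims[1] and 0 <= k < dims[2]:
            key = (i, j, k)
            sums[key] = sums.get(key, 0.0) + value
            counts[key] = counts.get(key, 0) + 1
    return {key: sums[key] / counts[key] for key in sums}

# Two echo samples fall in the same voxel, one lands in a neighbor
pts = [((0.2, 0.1, 0.3), 10.0), ((0.4, 0.2, 0.1), 20.0), ((1.2, 0.0, 0.0), 5.0)]
grid = voxelize(pts, origin=(0.0, 0.0, 0.0), voxel_size=1.0, dims=(4, 4, 4))
print(grid[(0, 0, 0)])  # -> 15.0, the average of the two co-located samples
```

On the GPU the same binning runs in parallel over every image pixel, which is what makes per-heartbeat reconstruction feasible.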

“ICE catheters are currently seldom used due to the difficulty in manual steering,” said principal investigator Prof. Robert D. Howe, Abbott and James Lawrence Professor of Engineering at Harvard University. “Our robotic system frees the clinicians of this burden, and presents them with a new method of real-time visualization that is safer and higher quality than the X-ray imaging that is used in the clinic. This is an enabling technology that can lead to new procedures that were not possible before, as well as improving the efficacy of the current ones.”

Providing real-time procedure guidance requires the use of efficient algorithms combined with a high-performance computing platform. Images are acquired at up to 60 frames per second from the ultrasound machine. Generating volumetric renderings from these images in real-time is only possible using GPUs.

Facial Recognition Software Helping Caterpillar Identify Sleepy Operators

Operator fatigue can potentially be a fatal problem for Caterpillar employees driving the massive mine trucks on long, repetitive shifts throughout the night.

Caterpillar recognized this and joined forces with Seeing Machines to install their fatigue detection software in thousands of mining trucks worldwide. Using NVIDIA TITAN X and GTX 1080 GPUs along with the cuDNN-accelerated Theano, TensorFlow and Caffe deep learning frameworks, the Australia-based tech company trained its software for face tracking, gaze tracking, driver attention region estimation, facial recognition, and fatigue detection.

On board the truck, a camera, speaker and light system monitor the driver; once a potential “fatigue event” is detected, an alarm sounds in the cab and a video clip of the driver is sent to a 24-hour “sleep fatigue center” at Caterpillar headquarters.
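
The alerting logic can be sketched with a simple eye-closure heuristic. The class below is a hypothetical stand-in for Seeing Machines' trained detector, using a PERCLOS-style sliding-window measure (the name `FatigueMonitor` and the thresholds are invented for illustration):

```python
from collections import deque

class FatigueMonitor:
    """Flag a potential 'fatigue event' from per-frame eye-closure flags.

    Tracks the fraction of recent frames with eyes closed over a sliding
    window and raises an event when it crosses a threshold.
    """
    def __init__(self, window=30, threshold=0.5):
        self.frames = deque(maxlen=window)   # old frames drop off automatically
        self.threshold = threshold

    def update(self, eyes_closed):
        self.frames.append(1 if eyes_closed else 0)
        closed_fraction = sum(self.frames) / len(self.frames)
        return closed_fraction >= self.threshold

monitor = FatigueMonitor(window=10, threshold=0.5)
alerts = [monitor.update(closed) for closed in [False] * 5 + [True] * 5]
print(alerts[-1])  # -> True: half the recent window shows closed eyes
```

A production system feeds the per-frame flags from a neural eye-state classifier and would also gate on head pose and gaze, but the windowed decision is the same shape.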

“This system automatically scans for the characteristics of microsleep in a driver,” Sal Angelone, a fatigue consultant at the company, told The Huffington Post, referencing the brief, involuntary pockets of unconsciousness that are highly dangerous to drivers. “But this is verified by a human working at our headquarters in Peoria.”

In the past year, Caterpillar referenced two instances – in one, a driver had three fatigue events within four hours; he was contacted onsite and made to take a nap. In another, a night-shift truck driver who experienced a fatigue event realized it was a sign of a sleep disorder and asked his management for medical assistance.

It’s only a matter of time before this technology is incorporated into every car on the road.

Smart Home Hub Brings Artificial Intelligence Into Your Home

A new AI-powered device will be able to replace all of your various smart home control apps, and will recognize specific people and respond to a range of emotions and gestures.

AI Build is a London-based startup focused on making your smart home more natural and intuitive. Powered by an NVIDIA Jetson TX1 and six cameras, the aiPort keeps track of your daily activities and uses this knowledge to get better at helping you. It learns your preferences, recognizes your body language, and adapts its actions with your comfort in mind.

The startup plans to launch a crowd-funding campaign later this year and sell the device for about $1,000.

Autonomous Robot Starts Work as Office Manager

Programmed with the latest artificial intelligence software, Betty will spend the next two months working as an office manager at the Transport Systems Catapult, monitoring staff and checking environmental conditions.

The robot, developed by engineers at the University of Birmingham, uses NVIDIA GPUs for various forms of computer vision — like feature extraction — and 3D image processing to create a map of the surrounding area. This allows Betty to identify desks, chairs and other objects that she must negotiate while moving around the office, and to observe her colleagues’ movements through activity recognition.
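
As a toy illustration of the mapping step, the sketch below bins sensed obstacle points into a 2-D occupancy grid. The function `occupancy_grid` and the sample points are hypothetical, not the STRANDS codebase; a real mapper would also ray-trace free space and fuse many scans over time:

```python
def occupancy_grid(obstacle_points, size, cell):
    """Build a size x size occupancy grid from obstacle points (world meters).

    Cells containing at least one sensed point are marked occupied (1);
    everything else stays 0 (unknown/free in this simplified version).
    """
    grid = [[0] * size for _ in range(size)]
    for x, y in obstacle_points:
        i, j = int(x // cell), int(y // cell)
        if 0 <= i < size and 0 <= j < size:
            grid[j][i] = 1                    # row = y bin, column = x bin
    return grid

# A desk edge sensed at two points, plus one reading outside the mapped area
points = [(1.2, 0.4), (1.6, 0.4), (9.5, 9.5)]
grid = occupancy_grid(points, size=8, cell=0.5)
print(grid[0][2], grid[0][3])  # -> 1 1, the cells covering the desk edge
```

Path planning then treats occupied cells as obstacles the robot must negotiate around.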

“For robots to work alongside humans in normal work environments it is important that they are both robust enough to operate autonomously without expert help, and that they learn to adapt to their environments to improve their performance,” said Dr Nick Hawes, from the School of Computer Science at the University of Birmingham. “Betty demonstrates both these abilities in a real working environment: we expect her to operate for two months without expert input, whilst using cutting-edge AI techniques to increase her understanding of the world around her.”

Betty is part of the EU-funded STRANDS project, in which robots learn to act intelligently and independently in real-world environments while understanding 3D space.

About me

My name is Sayed Ahmadreza Razian, and I hold a master’s degree in Artificial Intelligence.
Click here to view my CV and resume page.

Related topics such as image processing, machine vision, virtual reality, machine learning, data mining, and monitoring systems are my research interests, and I intend to pursue a PhD in one of these fields.

My Scientific expertise
  • Image processing
  • Machine vision
  • Machine learning
  • Pattern recognition
  • Data mining - Big Data
  • CUDA Programming
  • Game and Virtual reality
