Get Ready for the HTC Vive with NVIDIA GPUs

HTC today announced the consumer version of its Vive virtual reality headset. HTC Vive brings “room-scale” VR that enables new ways for gamers and professionals to walk around and interact with their virtual environments.

It will also bring a host of exciting new content that will be available from the Steam platform — such as Tilt Brush (which lets you paint on a virtual canvas), Everest VR (which lets you grapple your way up the world’s tallest peak) and Job Simulator (a fun take on modern work life).

HTC published its recommended specs to power its new headset. It also highlighted several GeForce GTX-powered PCs optimized for Vive from Alienware, MSI and HP.

To get the most out of your experience, you’ll need a PC with a GeForce GTX 970 or higher, a GeForce GTX 980-based notebook or a workstation with a Quadro M5000 or higher.
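
If you want a quick way to check what you have, the short sketch below asks the driver for the GPU name via nvidia-smi and compares it against a small, illustrative subset of the VR-ready parts listed above. It is only a sketch: it assumes a Linux-style system with nvidia-smi on the PATH (POSIX popen), and the model list is a placeholder rather than NVIDIA's official VR Ready list.

    #include <array>
    #include <cstdio>
    #include <string>

    int main() {
        // Ask the NVIDIA driver for the GPU name (assumes nvidia-smi is on the PATH).
        FILE* pipe = popen("nvidia-smi --query-gpu=name --format=csv,noheader", "r");
        if (!pipe) { std::puts("Could not run nvidia-smi."); return 1; }
        std::array<char, 256> buf{};
        std::string name = std::fgets(buf.data(), buf.size(), pipe) ? buf.data() : "";
        pclose(pipe);
        while (!name.empty() && (name.back() == '\n' || name.back() == '\r')) name.pop_back();

        // Illustrative subset of the VR-ready parts named above, not an exhaustive list.
        const char* vrReady[] = { "GTX 970", "GTX 980", "TITAN X", "Quadro M5000" };
        for (const char* model : vrReady) {
            if (name.find(model) != std::string::npos) {
                std::printf("Detected \"%s\": looks ready for the Vive.\n", name.c_str());
                return 0;
            }
        }
        std::printf("Detected \"%s\": check GeForce.com for the full VR Ready list.\n", name.c_str());
        return 0;
    }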

All these VR-ready GPUs support GameWorks VR and DesignWorks VR technologies, which help reduce latency and improve performance for VR games and apps. In fact, HTC Vive takes advantage of NVIDIA Direct Mode, a GameWorks VR feature that provides plug-and-play compatibility between NVIDIA GPUs and the headset.

If you’re looking for a VR-ready PC, or just need to update your graphics card, head over to GeForce.com and check out a wide range of GeForce GTX VR Ready graphics cards, PCs and notebooks.

Stay tuned for Feb. 29 when HTC Vive goes up for pre-order.

A Deep Learning AI Chip for Your Phone

Neural networks learn to recognize objects in images and perform other artificial intelligence tasks with a very low error rate. (Just last week, a neural network built by Google’s DeepMind lab in London beat a master of the complex game of Go, one of the grand challenges of AI.) But they’re typically too complex to run on a smartphone, where, you have to admit, they’d be pretty useful. Perhaps not for much longer. At the IEEE International Solid-State Circuits Conference in San Francisco on Tuesday, MIT engineers presented a chip designed to run sophisticated image-processing neural network software on a smartphone’s power budget.

The great performance of neural networks doesn’t come free. In image processing, for example, neural networks like AlexNet work so well because they put an image through a huge number of filters, first finding image edges, then identifying objects, then figuring out what’s happening in a scene. All that requires moving data around a computer again and again, which takes a lot of energy, says Vivienne Sze, an electrical engineering professor at MIT. Sze collaborated with MIT computer science professor Joel Emer, who is also a senior research scientist at GPU-maker Nvidia.

Eyeriss has 168 processing elements (PE), each with its own memory.

“On our chip we bring the data as close as possible to the processing units, and move the data as little as possible,” says Sze. When run on an ordinary GPU, neural networks fetch the same image data multiple times. The MIT chip has 168 processing engines, each with its own dedicated memory nearby. Nearby units can talk to each other directly, and this proximity saves power. There’s also a larger, primary storage bank farther off, of course. “We try to go there as little as possible,” says Emer. To further limit data movement, the hardware compresses the data it does send and uses statistics about the data to do fewer calculations on it than a GPU would.
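
As a toy illustration of that reuse idea (emphatically not the actual Eyeriss dataflow), the sketch below runs the same 1-D convolution twice: once fetching every filter weight from “main memory” for every output, and once copying the weights into a small PE-local buffer first. It simply counts how many main-memory weight reads each strategy needs.

    #include <array>
    #include <cstdio>
    #include <vector>

    int main() {
        const int N = 1024;   // input length
        const int K = 11;     // filter size
        std::vector<float> input(N, 1.0f), weights(K, 0.5f), out(N - K + 1);
        long naiveReads = 0, reuseReads = 0;

        // Naive: re-read each weight from "main memory" for every output element.
        for (int i = 0; i < N - K + 1; ++i) {
            float acc = 0.0f;
            for (int k = 0; k < K; ++k) { acc += input[i + k] * weights[k]; ++naiveReads; }
            out[i] = acc;
        }

        // Reuse: load the weights into a PE-local buffer once, then reuse them.
        std::array<float, K> local{};
        for (int k = 0; k < K; ++k) { local[k] = weights[k]; ++reuseReads; }
        for (int i = 0; i < N - K + 1; ++i) {
            float acc = 0.0f;
            for (int k = 0; k < K; ++k) acc += input[i + k] * local[k];
            out[i] = acc;
        }

        std::printf("weight reads from main memory: naive = %ld, with local reuse = %ld\n",
                    naiveReads, reuseReads);
        return 0;
    }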

All that means that when running a powerful neural network program, the MIT chip, called Eyeriss, uses one-tenth the energy (0.3 watts) of a typical mobile GPU (5–10 W). “This is the first custom chip capable of demonstrating a full, state-of-the-art neural network,” says Sze. Eyeriss can run AlexNet, a highly accurate and computationally demanding neural network. Previous such chips could run only specific algorithms, says the MIT group; the researchers chose AlexNet as a test because it is so demanding, and they are confident the chip can run other networks of arbitrary size.

Besides a use in smartphones, this kind of chip could help self-driving cars navigate and play a role in other portable electronics. At ISSCC, Hoi-Jun Yoo’s group at the Korea Advanced Institute of Science and Technology showed a pair of augmented reality glasses that use a neural network to train a gesture- and speech-based user interface to a particular user’s gestures, hand size, and dialect.

Yoo says the MIT chip may be able to run neural networks at low power once they’re trained, but he notes that the even more computationally intensive learning process for AlexNet can’t be done on it. The MIT chip could in theory run any kind of trained neural network, whether it analyzes images, sounds, medical data, or whatever else. Yoo says it’s also important to design chips that may be more specific to a particular category of task—such as following hand gestures—and are better at learning those tasks on the fly. He says this could make for a better user experience in wearable electronics, for example. These systems need to be able to learn on the fly because the world is unpredictable and each user is different. Your computer should start to fit you like your favorite pair of jeans.

Epic Games Unveils ProtoStar at Samsung Galaxy Unpacked

At Mobile World Congress 2016, Epic Games revealed ProtoStar, a real-time 3D experience built with Unreal Engine 4 technology. Demonstrated on the newly unveiled Samsung Galaxy S7, ProtoStar is the first application using the Vulkan API to be shown at Samsung Galaxy Unpacked 2016.

“The new industry-standard Vulkan API brings key elements of high-end console graphics technology to mobile devices, and Samsung is leading the way with the amazing new Galaxy S7,” said Tim Sweeney, CEO of Epic Games. “As the first engine supporting Vulkan, Unreal Engine 4 provides a solid foundation for developers joining in the mobile graphics revolution.”

Unreal Engine 4’s implementation of the Vulkan API enables developers to create visually stunning, cross-platform 3D content that supports more draw calls and more dynamic objects onscreen, with faster performance than ever before.

ProtoStar introduces a slew of new Unreal Engine 4 rendering achievements on mobile, including:

  • Dynamic planar reflections (high-quality reflections for dynamic objects)
  • Full GPU particle support on mobile, including vector fields
  • Temporal anti-aliasing (TAA)
  • High-quality ASTC texture compression
  • Full scene dynamic cascaded shadows
  • Chromatic aberration
  • Mobile dynamic light refraction
  • Filmic tonemapping curve
  • Improved mobile static reflections
  • High-quality mobile depth of field
  • Vulkan API support with thousands of dynamic objects onscreen

In addition, Vulkan in Unreal Engine 4 gives developers more control over mobile tile-based graphics processors, allowing for very thin and fast graphics drivers with minimal overhead. Using Vulkan’s separate debug layer, developers can more thoroughly and easily inspect code and fix issues.
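
To make the “more draw calls, minimal overhead” point concrete, here is a minimal sketch using the Vulkan C API: a batch of draws is recorded up front into a command buffer and later submitted to the GPU in one go. All handles are assumed to have been created elsewhere, and a real renderer would also begin a render pass and bind vertex data; this fragment only shows how little per-draw work the driver has to do.

    #include <vulkan/vulkan.h>

    // Record a batch of draws into a pre-allocated command buffer. The command
    // buffer and pipeline are placeholder handles created elsewhere.
    void recordScene(VkCommandBuffer cmd, VkPipeline pipeline, uint32_t objectCount)
    {
        VkCommandBufferBeginInfo beginInfo{};
        beginInfo.sType = VK_STRUCTURE_TYPE_COMMAND_BUFFER_BEGIN_INFO;
        vkBeginCommandBuffer(cmd, &beginInfo);

        // (A render pass would normally be begun here with vkCmdBeginRenderPass,
        //  and vertex/index buffers bound with vkCmdBindVertexBuffers.)
        vkCmdBindPipeline(cmd, VK_PIPELINE_BIND_POINT_GRAPHICS, pipeline);
        for (uint32_t i = 0; i < objectCount; ++i) {
            // Per-object data typically arrives via push constants or descriptor sets.
            vkCmdDraw(cmd, 36, 1, 0, 0);  // 36 vertices, 1 instance per object
        }

        vkEndCommandBuffer(cmd);
        // The finished buffer is submitted once with vkQueueSubmit, so thousands of
        // recorded draws reach the GPU with minimal CPU overhead. Validation is
        // enabled separately through Vulkan's layer mechanism during development.
    }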

5 Startups Playing Big, and Betting on the Future, with Deep Learning

Real Life Analytics: Accurate, Automatic Ads

To power targeted in-store ads, the U.K.’s Real Life Analytics offers retailers a webcam and a dongle to attach to a digital display. Seems simple. But the deep learning software running inside that dongle does astonishing things.

Approach the display’s webcam, and a deep learning neural network figures out your age and gender. In milliseconds, it flips on an ad targeting your demographic. Meanwhile, the deep learning network — designed with DIGITS deep learning training software using the cuDNN-accelerated Caffe framework — analyzes your real-time engagement. Running on our Tegra chip, of course.
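
For flavor, the sketch below shows what a deploy-time Caffe forward pass of this kind looks like in C++. The prototxt/caffemodel file names, the 224x224 input size and the zero-filled frame are placeholders standing in for Real Life Analytics’ actual network and preprocessed webcam pixels; only the Caffe calls themselves are real.

    #include <caffe/caffe.hpp>
    #include <cstdio>

    int main() {
        caffe::Caffe::set_mode(caffe::Caffe::GPU);

        // Placeholder model files; a real deployment would ship trained networks.
        caffe::Net<float> net("age_gender_deploy.prototxt", caffe::TEST);
        net.CopyTrainedLayersFrom("age_gender.caffemodel");

        // Shape the input blob for one 3-channel 224x224 frame and fill it.
        caffe::Blob<float>* input = net.input_blobs()[0];
        input->Reshape(1, 3, 224, 224);
        net.Reshape();
        float* data = input->mutable_cpu_data();
        for (int i = 0; i < input->count(); ++i) data[i] = 0.0f;  // real webcam pixels go here

        net.Forward();  // one inference pass on the GPU

        // Pick the highest-probability demographic class from the output blob.
        const caffe::Blob<float>* output = net.output_blobs()[0];
        const float* probs = output->cpu_data();
        int best = 0;
        for (int c = 1; c < output->channels(); ++c)
            if (probs[c] > probs[best]) best = c;
        std::printf("predicted class: %d (p = %.3f)\n", best, probs[best]);
        return 0;
    }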

 

ZZ Photo’s “DeepPet” algorithm is up to five times more accurate than traditional object recognition.

ZZ Photo: Putting Pets on the Pedestal

ZZ Photo, a startup based in Ukraine, can help you sort out the thousands of images you’ve stashed on your PC. Using CUDA-enabled GPUs to speed up the computations in its neural networks, ZZ Photo analyzes the images on your PC, then sorts and arranges them, tagging each photo by face, scene or pet.

That’s right. ZZ Photo’s “DeepPet” algorithm can tell the difference between your labradoodle and your chiweenie. It’s up to five times more accurate than traditional object recognition algorithms in identifying cats and dogs.

MicroBlink: Math-Solving App Heads to No. 1

MicroBlink’s PhotoMath app reads and solves mathematical problems in real time.

With students recently returning to school, MicroBlink’s PhotoMath app headed to the top of the class as the No. 1 free iPhone download in the U.S. in early September. The app reads and solves mathematical problems in real time. Just take a picture of the problem with your smartphone or tablet.

MicroBlink, founded in Zagreb, Croatia, uses NVIDIA GPUs to train PhotoMath’s deep learning algorithms. The app can now handle fractions, inequalities, quadratic equations and more. It makes math simple by showing users how to solve math problems step by step. And parents rave about how the tool checks their kids’ homework.

HyperVerge: Innovative Image Identification

Forget scrolling past a series of selfies to find a photo of your driver’s license. HyperVerge, a startup out of India, has developed Silver, a mobile image recognition app that uses GPUs to process data and train its recognition engines.

The app sorts photos on mobile devices. It categorizes photos as faces, screenshots, and memes. It even identifies documents — a category that includes handwritten notes, ID scans and checks. HyperVerge has also developed tools to delete poor quality and duplicate photos.

ViSenze: Search Without Keywords

ViSenze’s image recognition technology powers visual search with uncanny accuracy.

If a picture is worth a thousand words, why are we doing so much typing into search engines? ViSenze, a Singapore-based startup, lets you search e-commerce platforms visually. Drop an image into its deep learning-powered platform and it quickly pulls up scores of similar images, without relying on keywords or manual image tagging.

In fact, its image recognition technology automatically does the tagging by attributes such as shape, color and pattern. So, for example, if you’ve found a dress but want to see similar sleeveless versions from your favorite e-tailers, or if you like a handbag but want to see variations in leather or with a tapered shape, ViSenze zeroes in with amazing accuracy and speed.
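
Under the hood, this kind of keyword-free search boils down to nearest-neighbour lookup over image feature vectors. The sketch below is only a conceptual illustration with tiny hand-made vectors and plain cosine similarity; ViSenze’s actual features come from deep networks, and its index and attribute tagging are its own.

    #include <cmath>
    #include <cstdio>
    #include <vector>

    // Cosine similarity between two feature vectors of equal length.
    static float cosine(const std::vector<float>& a, const std::vector<float>& b) {
        float dot = 0.0f, na = 0.0f, nb = 0.0f;
        for (size_t i = 0; i < a.size(); ++i) {
            dot += a[i] * b[i];
            na  += a[i] * a[i];
            nb  += b[i] * b[i];
        }
        return dot / (std::sqrt(na) * std::sqrt(nb) + 1e-12f);
    }

    int main() {
        // Pretend catalog of four items with 4-D features (real systems use far more dimensions).
        std::vector<std::vector<float>> catalog = {
            {0.90f, 0.10f, 0.00f, 0.20f},   // sleeveless dress
            {0.80f, 0.20f, 0.10f, 0.10f},   // long-sleeve dress
            {0.10f, 0.90f, 0.30f, 0.00f},   // leather handbag
            {0.00f, 0.80f, 0.50f, 0.10f},   // tapered handbag
        };
        std::vector<float> query = {0.85f, 0.15f, 0.05f, 0.15f};  // features of the query image

        int best = 0;
        float bestScore = -1.0f;
        for (size_t i = 0; i < catalog.size(); ++i) {
            float s = cosine(query, catalog[i]);
            std::printf("item %zu: similarity %.3f\n", i, s);
            if (s > bestScore) { bestScore = s; best = static_cast<int>(i); }
        }
        std::printf("closest catalog item: %d\n", best);
        return 0;
    }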

Bringing Massive Computing Power to the Masses

These are just a few of the startups using our GPUs to embrace the deep learning revolution. It’s no surprise. GPU acceleration is ideal for the demands of deep learning algorithms. These algorithms power applications in fields ranging from medical imaging analysis to self-driving cars.

Training computers on these algorithms requires they teach themselves. To do that, they process enormous amounts of data. Our DIGITS deep learning software and cuDNN programming library speed things along. For off-the-shelf capability, there’s the DIGITS DevBox. Combining four NVIDIA GeForce GTX TITAN X GPUs, DIGITS software and deep learning tools, it’s the world’s fastest deskside deep learning machine.

With tools like these, a startup can be as equipped to tackle deep learning problems as tech leaders with huge server rooms.

There’s no better place for GPU-using startups to highlight their groundbreaking work than the annual Emerging Companies Summit, where we’ll award $100,000 to the most promising venture. The daylong event, part of our annual GPU Technology Conference, will take place on April 6, 2016.



About me

My name is Sayed Ahmadreza Razian, and I hold a master’s degree in Artificial Intelligence.
Click here to view my CV/resume page.

My research interests include image processing, machine vision, virtual reality, machine learning, data mining, and monitoring systems, and I intend to pursue a PhD in one of these fields.


My Scientific expertise
  • Image processing
  • Machine vision
  • Machine learning
  • Pattern recognition
  • Data mining - Big Data
  • CUDA Programming
  • Game and Virtual reality

Download Nokte for Free


Coming Soon....

Greatest hits

Anyone who has never made a mistake has never tried anything new.

Albert Einstein

Imagination is more important than knowledge.

Albert Einstein

The fear of death is the most unjustified of all fears, for there’s no risk of accident for someone who’s dead.

Albert Einstein

One day you will wake up and there won’t be any more time to do the things you’ve always wanted. Do it now.

Paulo Coelho

Gravitation is not responsible for people falling in love.

Albert Einstein

Waiting hurts. Forgetting hurts. But not knowing which decision to take can sometimes be the most painful.

Paulo Coelho

You are what you believe yourself to be.

Paulo Coelho

It’s the possibility of having a dream come true that makes life interesting.

Paulo Coelho

