Artificial Intelligence
Potential of Virtual Reality in the Future
Virtual Reality is one of the technologies with the highest projected potential for growth. According to the latest forecasts from IDC Research (2018), investment in VR and AR will multiply 21-fold over the next four years, reaching 15.5 billion euros by 2022.
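For scale, a 21-fold increase over four years means more than doubling every year. The implied compound annual growth rate of IDC's forecast can be computed directly (a simple arithmetic sketch, not part of the IDC report):

```python
def implied_cagr(multiple, years):
    # Annual growth rate implied by an overall multiple over a number of years.
    return multiple ** (1 / years) - 1

# IDC's forecast: 21x growth between 2018 and 2022, i.e. over four years.
vr_ar_growth = implied_cagr(21, 4)  # about 1.14, i.e. roughly 114% per year
```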
Microsoft finds underwater datacenters are reliable, practical and use energy sustainably
Microsoft wrapped up Phase 2 of Project Natick, its plan to test the viability of underwater data centers.
The Future of Virtual Reality After The Pandemic (Covid19)
Coronavirus could be the catalyst to reinvigorate virtual reality headsets.
The future of Virtual Reality is here. VR gaming is expected to surge after the global pandemic, with the VR gaming market projected to reach 40 billion USD by 2026. Virtual Reality technology, apps, and games are pushing the VR industry forward, but the real game-changer is going to be the release of 3D interactive VR movies.
Latest NVIDIA JetPack Developer Tools Will Double Your Deep Learning Performance
From self-driving cars to medical diagnostics, deep learning powered artificial intelligence is impacting nearly every industry.
In 2015, NVIDIA’s Deep Learning Institute delivered more than 16,000 hours of training to help data scientists and developers master this burgeoning field of AI – and the need for deep learning training is rapidly growing.
Over the next four months, developers can take more than 80 instructor-led workshops and hands-on labs at one of eight GPU Technology Conferences around the world, starting this week at GTC China.
“We want to share all our knowledge about deep learning with the world so others can create amazing things with it,” said Mark Ebersole, director of the institute.

Julie Bernauer, an NVIDIA Deep Learning Institute instructor, teaches a class on deep learning on GPUs.
The Deep Learning Institute has joined forces with three industry-leading organizations to train data scientists and developers interested in deep learning:
- Teaming up with Coursera to create a series of courses on how deep learning is poised to transform healthcare
- Collaborating with Microsoft on a hands-on workshop about how to use deep learning to create smarter robots
- Partnering with Udacity to help developers learn how to build a self-driving car
GPUs Help Find a Massive New Reef Hiding Behind Great Barrier Reef
Australian scientists made a significant discovery hiding behind the world-famous Great Barrier Reef. The discovery was made using cutting-edge surveying technology, which revealed vast fields of doughnut-shaped mounds measuring up to 300 meters across and up to 10 meters deep.
“We’ve known about these geological structures in the northern Great Barrier Reef since the 1970s and 80s, but never before has the true nature of their shape, size and vast scale been revealed,” said Dr Robin Beaman of James Cook University, who helped lead the research.
The scientists from James Cook University, Queensland University of Technology, and the University of Sydney used LiDAR data collected by the Australian Navy to help reveal this deeper, subtler reef. They then used CUDA and GeForce GTX 1080 GPUs to compile and visualize the huge 3D bathymetry datasets.
“Having a high-performance GPU has been critical to this ocean mapping research,” says Beaman.

North-westerly view of the Bligh Reef area off Cape York. Depths are colored red (shallow) to blue (deep), over a depth range of about 50 meters. Bathymetry data from Australian Hydrographic Service.
The discovery has opened up many other new avenues of research.
“For instance, what do the 10-20 meter thick sediments of the bioherms tell us about past climate and environmental change on the Great Barrier Reef over this 10,000 year time-scale? And, what is the finer-scale pattern of modern marine life found within and around the bioherms now that we understand their true shape?”
Next up, the researchers plan to employ autonomous underwater vehicle technologies to unravel the physical, chemical and biological processes of the structures.
Deep Learning to Unlock Mysteries of Parkinson’s Disease
Researchers at The Australian National University are using deep learning and NVIDIA technologies to better understand the progression of Parkinson’s disease.
Currently it is difficult to determine what type of Parkinson’s someone has or how quickly the condition will progress.
The study will be conducted over the next five years at the Canberra Hospital in Australia and will involve 120 people suffering from the disease and an equal number of non-sufferers as a control group.
“There are different types of Parkinson’s that can look similar at the point of onset, but they progress very differently,” says Dr Deborah Apthorp of the ANU Research School of Psychology. “We are hoping the information we collect will differentiate between these different conditions.”

Researchers Alex Smith (L) and Dr Deborah Apthorp (R) work with Parkinson’s disease sufferer Ken Hood (middle).
Dr Apthorp said the research will measure brain imaging, eye tracking, visual perception and postural sway.
From the data collected during the study, the researchers will be using a GeForce GTX 1070 GPU and cuDNN to train their deep learning models to help find patterns that indicate degradation of motor function correlating with Parkinson’s.
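In outline, the pattern-finding step is supervised classification over the measures collected from each participant. The toy stand-in below (pure-Python logistic regression on made-up feature vectors; the actual study trains deep networks with cuDNN on a GTX 1070) shows the shape of the task:

```python
import math

def train_logistic(samples, labels, lr=0.1, epochs=200):
    # samples: per-participant feature vectors (e.g. postural-sway or
    # eye-tracking measures); labels: 1 for Parkinson's, 0 for control.
    # Plain stochastic gradient descent on the logistic loss.
    dim = len(samples[0])
    w = [0.0] * dim
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            z = sum(wi * xi for wi, xi in zip(w, x)) + b
            p = 1.0 / (1.0 + math.exp(-z))
            err = p - y
            w = [wi - lr * err * xi for wi, xi in zip(w, x)]
            b -= lr * err
    return w, b

def predict(w, b, x):
    # Probability that a feature vector belongs to the Parkinson's class.
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1.0 / (1.0 + math.exp(-z))
```

A deep network replaces the single linear scoring step with many learned layers, but the training loop — forward pass, error, gradient update — follows the same pattern.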
The researchers plan to incorporate virtual reality into their work by having sufferers wear head-mounted displays (HMDs), which will help them better understand how self-motion perception is altered in Parkinson’s disease, and by using stimuli that mimic the visual scene during self-motion.
“Additionally, we would like to explore the use of eye tracking built into HMDs, which is a much lower-cost alternative to a full research eye-tracking system and consolidates the equipment into a single, highly portable and versatile unit,” says researcher Alex Smith.
GPUs Help Cut Siri’s Error Rate by Half
To make Siri great, Apple hired several artificial intelligence experts three years ago to apply deep learning to its intelligent mobile assistant.
The team began training a neural net to replace the original Siri. “We have the biggest and baddest GPU farm cranking all the time,” says Alex Acero, who heads the speech team.
“The error rate has been cut by a factor of two in all the languages, more than a factor of two in many cases,” says Acero. “That’s mostly due to deep learning and the way we have optimized it.”
Besides Siri, deep learning and neural nets are now found all over Apple’s products and services – including fraud detection on the Apple store, facial recognition and location tagging in your photos, and identifying the most useful feedback from thousands of beta tester reports.
“The typical customer is going to experience deep learning on a day-to-day level that [exemplifies] what you love about an Apple product,” says Phil Schiller, senior vice president of worldwide marketing at Apple. “The most exciting [instances] are so subtle that you don’t even think about it until the third time you see it, and then you stop and say, How is this happening?”
Teaching an AI to Detect Key Actors in Multi-person Videos
Researchers from Google and Stanford have taught their computer vision model to detect the most important person in a multi-person video scene – for example, identifying the shooter in a basketball game, where a single scene typically contains dozens or hundreds of people.
The team used 20 Tesla K40 GPUs and the cuDNN-accelerated TensorFlow deep learning framework to train their recurrent neural network on 257 NCAA basketball games from YouTube. An attention mask selects which of the several people in the scene are most relevant to the action being performed, then tracks the relevance of each person as time proceeds. The team published a paper detailing more of their work.
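The attention mask is easiest to see as a soft weighting over per-player feature vectors. The sketch below is plain Python, not the authors' code; `query` stands in for the learned attention parameters. It scores each player, turns the scores into weights with a softmax, and pools the scene into one vector for the next recurrent step:

```python
import math

def softmax(scores):
    # Numerically stable softmax: shift by the max before exponentiating.
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def attend(player_features, query):
    # Score each detected player against the query vector, then pool the
    # scene into one vector weighted by the resulting attention mask.
    scores = [sum(f * q for f, q in zip(feat, query)) for feat in player_features]
    weights = softmax(scores)
    dim = len(player_features[0])
    pooled = [sum(w * feat[i] for w, feat in zip(weights, player_features))
              for i in range(dim)]
    return weights, pooled
```

Because the weights are differentiable, the network can learn end to end which player matters for each action, without per-player labels.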

The distribution of attention for the model with tracking, at the beginning of “free-throw success”. The attention is concentrated at a specific defender’s position. Free-throws have a distinctive defense formation, and observing the defenders can be helpful as shown in the sample images in the top row.
Over time the system can identify not only the most important actor, but also potentially important actors and the events with which they are associated. For example, it can recognize that the player going up for a layup could be important, but that the most important player is the one who then blocks the shot.
New Deep Learning Method Enhances Your Selfies
Researchers from Adobe Research and The Chinese University of Hong Kong created an algorithm that automatically separates subjects from their backgrounds so you can easily replace the background and apply filters to the subject.
Their research paper mentions there are good user-guided tools that support manually creating masks to separate subjects from the background, but the “tools are tedious and difficult to use, and remain an obstacle for casual photographers who want their portraits to look good.”

A highly accurate automatic portrait segmentation method allows many portrait processing tools to be fully automatic.
Using a TITAN X GPU and the cuDNN-accelerated Caffe deep learning framework, the researchers trained their convolutional neural network on 1,800 portrait images from Flickr. Their GPU-accelerated method was 20x faster than a CPU-only approach.
Portrait video segmentation is next on the radar for the researchers.
Advanced Real-Time Visualization for Robotic Heart Surgery
Researchers at the Harvard Biorobotics Laboratory are harnessing the power of GPUs to generate real-time volumetric renderings of patients’ hearts. The team has built a robotic system to autonomously steer commercially available cardiac catheters that can acquire ultrasound images from within the heart. They tested their system in the clinic and reported their results at the 2016 IEEE International Conference on Robotics and Automation (ICRA) in Stockholm, Sweden.
The team used an Intracardiac Echocardiography (ICE) catheter, which is equipped with an ultrasound transducer at the tip, to acquire 2D images from within a beating heart. Using NVIDIA GPUs, the team was able to reconstruct a 4D (3D + time) model of the heart from these ultrasound images.
Generating a 4D volume begins with co-registering ultrasound images that are acquired at different imaging angles but at the same phase of the cardiac cycle. The position and rotation of each image with respect to the world coordinate frame is measured using electromagnetic (EM) trackers that are attached to the catheter body. This point cloud is then discretized to lie on a 3D grid. Next, infilling is performed to fill the gaps between the slices, generating a dense volumetric representation of the heart. Finally, the volumes are displayed to the surgeon using volume rendering via raycasting, leveraging the CUDA – OpenGL interoperability. The team accelerated the volume reconstruction and rendering algorithms using two NVIDIA TITAN GPUs.
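The discretization step above can be sketched as a simple voxel accumulation. This is a minimal pure-Python stand-in for the team's GPU pipeline, assuming each tracked ultrasound sample has already been transformed into world coordinates; a real system would follow it with the infilling step to close gaps between slices:

```python
def voxelize(samples, shape, voxel_size):
    # Accumulate tracked intensity samples into a dense 3D grid,
    # averaging samples that fall into the same voxel.
    sums, counts = {}, {}
    for x, y, z, intensity in samples:
        i, j, k = int(x // voxel_size), int(y // voxel_size), int(z // voxel_size)
        if 0 <= i < shape[0] and 0 <= j < shape[1] and 0 <= k < shape[2]:
            sums[(i, j, k)] = sums.get((i, j, k), 0.0) + intensity
            counts[(i, j, k)] = counts.get((i, j, k), 0) + 1
    grid = [[[0.0] * shape[2] for _ in range(shape[1])] for _ in range(shape[0])]
    for (i, j, k), s in sums.items():
        grid[i][j][k] = s / counts[(i, j, k)]
    return grid
```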
“ICE catheters are currently seldom used due to the difficulty in manual steering,” said principal investigator Prof. Robert D. Howe, Abbott and James Lawrence Professor of Engineering at Harvard University. “Our robotic system frees the clinicians of this burden, and presents them with a new method of real-time visualization that is safer and higher quality than the X-ray imaging that is used in the clinic. This is an enabling technology that can lead to new procedures that were not possible before, as well as improving the efficacy of the current ones.”
Providing real-time procedure guidance requires the use of efficient algorithms combined with a high-performance computing platform. Images are acquired at up to 60 frames per second from the ultrasound machine. Generating volumetric renderings from these images in real-time is only possible using GPUs.
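Raycasting a volume amounts to compositing the samples encountered along each viewing ray. A minimal sketch of front-to-back compositing for a single ray (plain Python with a caller-supplied opacity transfer function; the actual renderer runs this per pixel in CUDA):

```python
def composite_ray(samples, opacity):
    # Front-to-back alpha compositing of intensity samples along one ray.
    color, transmittance = 0.0, 1.0
    for s in samples:
        a = opacity(s)
        color += transmittance * a * s
        transmittance *= 1.0 - a
        if transmittance < 1e-3:  # early ray termination: ray is opaque
            break
    return color
```

Early ray termination is one reason GPU raycasting keeps up at 60 frames per second: rays stop as soon as accumulated opacity saturates.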