A new version of GPU Caps Viewer (OpenGL, Vulkan, OpenCL and CUDA utility) is available.
1 – Overview
GPU Caps Viewer 1.34.0 adds support for the latest GeForce GTX 1080 Ti, Radeon RX 580, RX 570 and RX 560 (based on Polaris 10/11 GPUs), as well as the Radeon Pro WX 7100, WX 5100 and WX 4100.
New Vulkan and OpenGL demos (based on the GeeXLab engine) have been added to the 3D demos panel. These GeeXLab demos can also be launched from the command line:
GpuCapsViewer.exe /demo_win_width=1920 /demo_win_height=1080
/benchmark_log_results /benchmark_duration=10000 /demo_msaa=0
Command line parameters can be found in the _run_geexlab_benchmark.bat file available in the GPU Caps Viewer folder. The benchmark results are saved to a CSV file (_gxl_benchmark_results.csv).
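The results file can then be post-processed with any CSV tooling. A minimal sketch in Python — note that the column names used here (demo, score) are assumptions for illustration, not the documented layout of _gxl_benchmark_results.csv:

```python
import csv
import io

def load_benchmark_results(csv_text):
    """Parse benchmark CSV text into a list of row dicts.
    Column names are assumptions; adjust to the real header of
    _gxl_benchmark_results.csv."""
    reader = csv.DictReader(io.StringIO(csv_text))
    return list(reader)

# Hypothetical sample mimicking a results file (columns are assumed).
sample = "demo,avg_fps,score\nvk_geomechanical,87.5,875\ngl21_rainforest,60.2,602\n"
rows = load_benchmark_results(sample)
best = max(rows, key=lambda r: float(r["score"]))
print(best["demo"])  # demo with the highest score
```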
Another change to GPU Caps Viewer concerns the UAC (User Account Control) execution level, which has been reset to the default value (as invoker). The previous UAC level (administrator) didn't work well when launching demos via the command line.
>> Update: 2017.04.29 <<
GPU Caps Viewer 1.34.5 is a maintenance release and comes with a new VK-Z version. VS2015 generates some strange patterns in the 32-bit build of VK-Z, and no matter how I compile it, the 32-bit version is still flagged as infected by some antivirus scanners. The 64-bit version of VK-Z is clean, so I simply replaced the 32-bit version with the 64-bit one. GPU Caps Viewer has been updated to execute VK-Z x64 on Windows 64-bit only…
>> Update: 2017.04.26 <<
GPU Caps Viewer 1.34.4 is a maintenance release that improves the detection of recent AMD Radeon GPUs (RX 580, RX 570, RX 480 and RX 470). GPU Shark has been updated to version 0.9.11.4, and VK-Z to version 0.6.0.5 32-bit (recompiled with different compiler options in order to avoid false-positive detections by antivirus scanners. I really don't understand this pesky problem!).
>> Update: 2017.04.16 <<
GPU Caps Viewer 1.34.3 is available with VK-Z 0.6.0.3. The previous 32-bit version of VK-Z was flagged as infected by some antivirus scanners. The new VK-Z 32-bit is clean for NOD32, Avira, Avast, Kaspersky, AVG, Norton and McAfee.
>> Update: 2017.04.14 <<
GPU Caps Viewer 1.34.2 adds support for the NVIDIA TITAN Xp. GPU Shark and VK-Z have been updated to their latest versions.
>> Update: 2017.04.01 <<
GPU Caps Viewer 1.34.1 improves the detection of AMD Radeon GPUs in some systems that include an Intel iGPU + AMD Radeon GPU. A new GeeXLab/OpenGL demo has been added (Alien Corridor).
2 – Downloads
2.1 – Portable version (zip archive – no installation required):
2.2 – Win32 installer:
For any feedback or bug report, a thread on Geeks3D forums is available HERE.
3 – What is GPU Caps Viewer?
GPU Caps Viewer is a graphics card information utility focused on the OpenGL, Vulkan, OpenCL and CUDA API support level of the main (primary) graphics card. For Vulkan, OpenCL and CUDA, GPU Caps Viewer details the API support of each capable device available in the system. GPU Caps Viewer also offers a simple GPU monitoring facility (clock speed, temperature, GPU usage, fan speed) for NVIDIA GeForce and AMD Radeon based graphics cards. GPU data can be submitted to an online GPU database.
4 – Changelog
Version 1.34.5 – 2017.04.29
! replaced VK-Z 0.6.0.5 32-bit with VK-Z 0.6.0.3 64-bit.
VK-Z 64-bit does not produce false positive with some
antivirus. VK-Z 64-bit is only executed on Windows 64-bit.
Version 1.34.4 – 2017.04.20
! improved detection of Radeon RX 580, RX 570,
RX 480 and RX 470.
! updated: GPU Shark 0.9.11.4
! updated: VK-Z 0.6.0.5
! updated: ZoomGPU 1.20.4 (GPU monitoring library).
Version 1.34.3 – 2017.04.16
! updated: VK-Z 0.6.0.3
Version 1.34.2 – 2017.04.14
+ added support of NVIDIA TITAN Xp.
! updated Radeon RX 560 shader cores.
! updated: VK-Z 0.6.0
! updated with GeeXLab SDK libs.
! updated: GPU Shark 0.9.11.3
! updated: ZoomGPU 1.20.3 (GPU monitoring library).
Version 1.34.1 – 2017.04.01
! improved the detection of AMD Radeon GPUs in some systems
that include an Intel iGPU + AMD Radeon GPU.
+ added new OpenGL demo (GeeXLab): Alien Corridor (based on this demo).
Command line demo codename: gl21_shadertoy_mp_alien_corridor
! updated: GPU Shark 0.9.11.2
! updated: ZoomGPU 1.20.2 (GPU monitoring library).
Version 1.34.0 – 2017.03.25
+ added support of the GeForce GTX 1080 Ti.
+ added support of AMD Radeon RX 580, RX 570 and RX 560.
+ added support of AMD Radeon Pro WX 7100, WX 5100, WX 4100,
WX 4150 and WX 4130.
+ added initial support of AMD Polaris 12 and Vega 10 based videocards.
+ added new parameters for launching GeeXLab demos via the command line.
+ added new Vulkan and OpenGL demos (GeeXLab): Vulkan geomechanical (based on this demo), OpenGL rainforest (based on this demo), OpenGL radialblur (based on this demo), OpenGL rhodium (based on this demo), OpenGL cell shading, OpenGL geometry instancing, OpenGL gs mesh exploder.
! set UAC (User Account Control) execution level to as invoker.
! recompiled with latest Vulkan API headers (v1.0.45).
! updated with latest GeeXLab SDK libs.
! updated: GPU Shark 0.9.11.1
! updated: ZoomGPU 1.20.1 (GPU monitoring library).
The TITAN Xp is based on a full GP102 GPU with 3840 CUDA cores while the TITAN X has only 3584 CUDA cores. The TITAN Xp has 240 texture units while the TITAN X has 224 texture units. Both cards have the same number of ROPs (96) and the same amount of memory: 12GB of GDDR5X.
On the physical side, there are few differences between the TITAN Xp and the TITAN X. NVIDIA has not changed the name on the VGA shroud: TITAN X for both cards…
The only things that distinguish the two cards are the box, the PCB color (brown for the Xp) and the DVI connector (the Xp has no DVI connector):
The TITAN Xp alone:
Today NVIDIA released a major update to the JetPack SDK with new developer tools and libraries, doubling the performance of deep learning applications on the Jetson TX1 Developer Kit, the world's highest-performance platform for deep learning on embedded systems.
JetPack 2.3 is available as a free download and is focused on making it easier for developers to add complex AI and deep learning capabilities to intelligent machines. This update includes the new TensorRT deep learning inference engine, the latest versions of CUDA 8 and cuDNN 5.1, and tighter camera and multimedia integration to easily add complex AI and deep learning abilities to intelligent machines.
In addition, NVIDIA announced a new partnership with Leopard Imaging Inc., a Jetson Preferred Partner that specializes in the creation of camera solutions. The new camera API included in the JetPack 2.3 release delivers enhanced functionality to ease developer integration.
Download JetPack 2.3 today.
- The efficiency was measured using the methodology outlined in the whitepaper.
- Jetson TX1 efficiency is measured at GPU frequency of 691 MHz.
- Intel Core i7-6700k efficiency was measured for 4 GHz CPU clock.
- GoogLeNet batch size was limited to 64, as that is the maximum that could run with JetPack 2.0. With JetPack 2.3 and TensorRT, GoogLeNet batch size 128 is also supported for higher performance.
- FP16 results for Jetson TX1 are comparable to FP32 results for Intel Core i7-6700k as FP16 incurs no classification accuracy loss over FP32.
- Latest publicly available software versions of IntelCaffe and MKL2017 beta were used.
- For JetPack 2.0 and the Intel Core i7, non-zero data was used for both weights and input images. For JetPack 2.3 (TensorRT), real images and weights were used.
Australian scientists made a significant discovery hiding behind the world-famous Great Barrier Reef. The discovery was made using cutting-edge surveying technology, which revealed vast fields of doughnut-shaped mounds measuring up to 300 meters across and up to 10 meters deep.
“We’ve known about these geological structures in the northern Great Barrier Reef since the 1970s and 80s, but never before has the true nature of their shape, size and vast scale been revealed,” said Dr Robin Beauman of James Cook University, who helped lead the research.
The scientists from James Cook University, Queensland University of Technology and the University of Sydney used LiDAR data obtained from the Australian Navy to help reveal this deeper, subtler reef. They then used CUDA and GeForce GTX 1080 GPUs to compile and visualize the huge 3D bathymetry datasets.
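The article does not describe the processing itself, but turning millions of scattered depth soundings into a raster grid is the usual first step before visualization. A toy NumPy sketch of that binning step (the function name and the normalized-coordinate scheme are illustrative, not the researchers' code):

```python
import numpy as np

def grid_bathymetry(x, y, depth, nx, ny):
    """Bin scattered (x, y, depth) soundings onto an nx-by-ny raster,
    averaging the depths that fall into each cell (empty cells -> NaN).
    Assumes x and y are already normalized to [0, 1)."""
    xi = np.clip((x * nx).astype(int), 0, nx - 1)
    yi = np.clip((y * ny).astype(int), 0, ny - 1)
    total = np.zeros((ny, nx))
    count = np.zeros((ny, nx))
    np.add.at(total, (yi, xi), depth)   # unbuffered accumulation per cell
    np.add.at(count, (yi, xi), 1)
    return np.where(count > 0, total / np.maximum(count, 1), np.nan)

# Synthetic soundings: depth increases eastward from 20 m to 30 m.
rng = np.random.default_rng(0)
pts = rng.random((1000, 2))
depths = 20 + 10 * pts[:, 0]
grid = grid_bathymetry(pts[:, 0], pts[:, 1], depths, 32, 32)
print(grid.shape)
```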
“Having a high-performance GPU has been critical to this ocean mapping research,” says Beauman.
The discovery has opened up many other new avenues of research.
“For instance, what do the 10-20 meter thick sediments of the bioherms tell us about past climate and environmental change on the Great Barrier Reef over this 10,000 year time-scale? And, what is the finer-scale pattern of modern marine life found within and around the bioherms now that we understand their true shape?”
Next up, the researchers plan to employ autonomous underwater vehicle technologies to unravel the physical, chemical and biological processes of the structures.
Researchers at the Harvard Biorobotics Laboratory are harnessing the power of GPUs to generate real-time volumetric renderings of patients’ hearts. The team has built a robotic system to autonomously steer commercially available cardiac catheters that can acquire ultrasound images from within the heart. They tested their system in the clinic and reported their results at the 2016 IEEE International Conference on Robotics and Automation (ICRA) in Stockholm, Sweden.
The team used an Intracardiac Echocardiography (ICE) catheter, which is equipped with an ultrasound transducer at the tip, to acquire 2D images from within a beating heart. Using NVIDIA GPUs, the team was able to reconstruct a 4D (3D + time) model of the heart from these ultrasound images.
Generating a 4D volume begins with co-registering ultrasound images that are acquired at different imaging angles but at the same phase of the cardiac cycle. The position and rotation of each image with respect to the world coordinate frame are measured using electromagnetic (EM) trackers attached to the catheter body. This point cloud is then discretized to lie on a 3D grid. Next, infilling is performed to fill the gaps between the slices, generating a dense volumetric representation of the heart. Finally, the volumes are displayed to the surgeon using volume rendering via raycasting, leveraging CUDA/OpenGL interoperability. The team accelerated the volume reconstruction and rendering algorithms using two NVIDIA TITAN GPUs.
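As a rough illustration of the discretize-then-infill idea (not the team's implementation), here is a simple iterative neighbor-average fill on a sparse 3D grid in NumPy:

```python
import numpy as np

def infill_once(vol, known):
    """One gap-filling pass: each unknown voxel takes the mean of its
    known 6-neighbors, if any. Repeat until the volume is dense."""
    acc = np.zeros_like(vol)
    cnt = np.zeros_like(vol)
    for axis in range(3):
        for shift in (1, -1):
            acc += np.roll(np.where(known, vol, 0.0), shift, axis=axis)
            cnt += np.roll(known.astype(float), shift, axis=axis)
    newly = (cnt > 0) & ~known
    out = vol.copy()
    out[newly] = acc[newly] / cnt[newly]
    return out, known | newly

# Sparse toy volume: intensities known only on every 4th slice,
# mimicking registered 2D ultrasound slices placed in a 3D grid.
vol = np.zeros((16, 16, 16))
known = np.zeros_like(vol, dtype=bool)
vol[::4] = 1.0
known[::4] = True
while not known.all():
    vol, known = infill_once(vol, known)
print(bool(known.all()), float(vol.min()))
```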
“ICE catheters are currently seldom used due to the difficulty in manual steering,” said principal investigator Prof. Robert D. Howe, Abbott and James Lawrence Professor of Engineering at Harvard University. “Our robotic system frees the clinicians of this burden, and presents them with a new method of real-time visualization that is safer and higher quality than the X-ray imaging that is used in the clinic. This is an enabling technology that can lead to new procedures that were not possible before, as well as improving the efficacy of the current ones.”
Providing real-time procedure guidance requires the use of efficient algorithms combined with a high-performance computing platform. Images are acquired at up to 60 frames per second from the ultrasound machine. Generating volumetric renderings from these images in real-time is only possible using GPUs.
NVIDIA TITAN X
THE WORLD’S MOST ADVANCED GPU ARCHITECTURE
GeForce GTX 10-series graphics cards are powered by Pascal to deliver up to 3x the performance of previous-generation graphics cards, plus innovative new gaming technologies and breakthrough VR experiences.
Irresponsible Amount of Performance
We packed the most raw horsepower we possibly could into this GPU. Driven by 3584 NVIDIA CUDA® cores running at 1.5GHz, TITAN X packs 11 TFLOPs of brute force. Plus it’s armed with 12 GB of GDDR5X memory – one of the fastest memory technologies in the world.
NVIDIA TITAN X
Frame Buffer: 12 GB G5X
Memory Speed: 10 Gbps
Boost Clock: 1531 MHz
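The quoted 11 TFLOPS figure follows from the standard peak-FP32 estimate of 2 floating-point operations (one fused multiply-add) per CUDA core per clock:

```python
# Peak FP32 throughput = cores x 2 ops (fused multiply-add) x clock.
cores = 3584          # TITAN X (Pascal) CUDA cores
boost_hz = 1531e6     # boost clock from the spec table
tflops = cores * 2 * boost_hz / 1e12
print(round(tflops, 1))  # ~11.0
```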
TITAN X is crafted to offer superior heat dissipation using vapor chamber cooling technology in a die cast aluminum body. It’s a powerful combination of brilliant efficiency, stunning design, and industry-leading performance.
NVIDIA SLI® TECHNOLOGY
GEFORCE GTX SLI HB BRIDGE
NVIDIA’s new SLI bridge doubles the available transfer bandwidth compared to the NVIDIA Maxwell™ architecture. Delivering silky-smooth gameplay, it’s the best way to experience surround gaming—and it’s only compatible with the NVIDIA TITAN X, GeForce GTX 1080 and GeForce GTX 1070 graphics cards.
Limit 1 per customer
With our planet getting warmer and warmer, and carbon dioxide levels steadily creeping up, companies are using deep learning to help cope with the effects that climate change is having on their crops.
An article on MIT Technology Review highlights PEAT, a German company using CUDA, TITAN X GPUs and the cuDNN-accelerated Caffe deep learning framework to provide farmers with a plant disease and diagnostics management tool. Farmers are able to take a picture of their affected plants, upload it to PEAT’s “Plantix” mobile app and get treatment recommendations within seconds. The database currently contains information on 52 crops worldwide and the ability to detect 160 plant diseases, pests and nutrient deficiencies with 95% accuracy.
As mobile phones are now ubiquitous throughout the developing world, this solution provides the last-mile connectivity that farmers need to deal with the impact of a changing climate.
Last Thursday at the International Conference on Machine Learning (ICML) in New York, Facebook announced a new piece of open source software aimed at streamlining and accelerating deep learning research. The software, named Torchnet, provides developers with a consistent set of widely used deep learning functions and utilities. Torchnet allows developers to write code in a consistent manner speeding development and promoting code re-use both between experiments and across multiple projects.
Torchnet sits atop the popular Torch deep learning framework and benefits from GPU acceleration via CUDA and cuDNN. Further, Torchnet has built-in support for asynchronous, parallel data loading and can make full use of multiple GPUs for vastly improved iteration times. This automatic support for multi-GPU training helps Torchnet take full advantage of powerful systems like the NVIDIA DGX-1 with its eight Tesla P100 GPUs.
According to the Torchnet research paper, its modular design makes it easy to re-use code in a series of experiments. For instance, running the same experiments on a number of different datasets is accomplished simply by plugging in different dataloaders. And the evaluation criterion can be changed easily by plugging in a different performance meter.
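Torchnet itself is written in Lua for the Torch framework, but the plug-in design described in the paper — one training loop driven by interchangeable data loaders and performance meters — can be sketched in a few lines. A hypothetical Python outline (the names are illustrative, not Torchnet's API):

```python
class AverageMeter:
    """Tracks a running average of a metric (analogous to a Torchnet meter)."""
    def __init__(self):
        self.total, self.n = 0.0, 0
    def add(self, value):
        self.total += value
        self.n += 1
    def value(self):
        return self.total / self.n if self.n else 0.0

def run_experiment(dataloader, evaluate, meter):
    """Same loop for any dataset or metric: the swappable-parts idea."""
    for batch in dataloader():
        meter.add(evaluate(batch))
    return meter.value()

# Swapping datasets = swapping loaders; the loop never changes.
loader_a = lambda: iter([1.0, 2.0, 3.0])
loader_b = lambda: iter([10.0, 20.0])
score = lambda b: b * 2           # stand-in for a model evaluation
print(run_experiment(loader_a, score, AverageMeter()))  # 4.0
print(run_experiment(loader_b, score, AverageMeter()))  # 30.0
```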
Torchnet adds another powerful tool to data scientists’ toolkit and will help speed the design and training of neural networks, so they can focus on their next great advancement.
The Tampa Bay Buccaneers unveiled a state-of-the-art experience to provide fans with the ability to virtually sample the new gameday enhancements that will debut beginning with the team’s 2016 season.
While the new Raymond James Stadium is under construction, current and prospective ticket holders can use a virtual reality headset to experience the new stadium before it opens in September. The realistic preview is also valuable in attracting potential sponsors by helping executives visualize their company’s logo inside the stadium.
Brian Killingsworth, chief marketing officer of the Buccaneers, said the Bucs are the first professional sports team to integrate video and a “three-dimensional environment with full freedom of motion touring capabilities.”
“We’re seeing a massive opportunity to leverage the technology to give a view from the seats from a season-ticket perspective, and of course corporate sponsors are really important to the finances of a particular team. When folks have their first opportunity to experience virtual reality, that alone helps qualify a sales pitch,” MVP Interactive CEO James Giglio said.
The painting and drawing tools most people use are 2D, but a new project gives artists the ability to choose any brush they like, pick from a limitless array of paint colors, and use the natural twists and turns of the brush to create the rich textures of oil painting, all on a digital canvas.
Delivering such a realistic, physically based painting tool requires some heavy-duty computational power, so Adobe Research collaborated with NVIDIA to create the world's first real-time simulation-based 3D painting system with bristle-level interactions, built entirely with CUDA. Adobe researchers Zhili Chen and Byungmoon Kim originally developed Project Wetbrush in 2015 and have since collaborated with NVIDIA software experts to optimize the application's performance, allowing them to add even more GPU-accelerated features to the system.
This is just the beginning for the project. Using deep learning, some of the most computationally challenging physical simulations could potentially be added to create more responsive and realistic brush dynamics, or the system could even learn from itself.