Robot
Share Your Science: Artificially Intelligent Robot for Children
Yi-Jian Wu, Founder & CEO of Yuanqu Tech in China, talks about how NVIDIA Tesla GPUs are being used to train their interactive educational robot for children. Call the robot’s name, and the speech-controlled robot can tell jokes, answer educational questions, teach English, and act as a patient tutor for a child.
For more information visit http://www.yuanqutech.com.
Share your GPU-accelerated science with us at http://nvda.ly/Vpjxr and with the world on #ShareYourScience.
Watch more scientists and researchers share how accelerated computing is benefiting their work at http://nvda.ly/X7WpH
Enabling human-robot rescue teams
System could help prevent robots from overwhelming human teammates with information.
Autonomous robots performing a joint task send each other continual updates: “I’ve passed through a door and am turning 90 degrees right.” “After advancing 2 feet I’ve encountered a wall. I’m turning 90 degrees right.” “After advancing 4 feet I’ve encountered a wall.” And so on.
Computers, of course, have no trouble filing this information away until they need it. But such a barrage of data would drive a human being crazy.
At the annual meeting of the Association for the Advancement of Artificial Intelligence last weekend, researchers at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) presented a new way of modeling robot collaboration that reduces the need for communication by 60 percent. They believe that their model could make it easier to design systems that enable humans and robots to work together — in, for example, emergency-response teams.
“We haven’t implemented it yet in human-robot teams,” says Julie Shah, an associate professor of aeronautics and astronautics and one of the paper’s two authors. “But it’s very exciting, because you can imagine: You’ve just reduced the number of communications by 60 percent, and presumably those other communications weren’t really necessary toward the person achieving their part of the task in that team.”
The work could also have implications for multirobot collaborations that don’t involve humans. Communication consumes some power, which is always a consideration in battery-powered devices, but in some circumstances, the cost of processing new information could be a much more severe resource drain.
In a multiagent system — the computer science term for any collaboration among autonomous agents, electronic or otherwise — each agent must maintain a model of the current state of the world, as well as a model of what each of the other agents takes to be the state of the world. These days, agents are also expected to factor in the probabilities that their models are accurate. On the basis of those probabilities, they have to decide whether or not to modify their behaviors.
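To make the idea concrete, here is a toy sketch (my illustration, not the researchers’ model) of an agent weighing its confidence in a teammate’s world model before deciding whether a broadcast is worth the cost; the state encoding and threshold are assumptions:

% Toy sketch: decide whether to broadcast a state update to a teammate.
% myState is the agent's own world model; teammateBelief is its estimate
% of what the teammate currently assumes; confidence weights that estimate.
myState        = [4 2 90];    % hypothetical x, y, heading
teammateBelief = [4 0 90];    % what we think the teammate believes
confidence     = 0.7;         % probability our estimate of the teammate is right

divergence = norm(myState - teammateBelief) * confidence;
if divergence > 1.0           % assumed communication threshold
    fprintf('Broadcast update: state = [%d %d %d]\n', myState);
else
    fprintf('Stay silent; the teammate''s model is close enough.\n');
end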
Autonomous Robot Will Iron Your Clothes
Columbia University researchers have created a robotic system that detects wrinkles and then irons the piece of cloth autonomously.
Their paper highlights that ironing is the final step needed in their “pipeline”, in which a robot picks up a wrinkled shirt, lays it on the table, irons it, and finally folds it with robotic arms.
A GeForce GTX 770 GPU was used for their “wrinkle analysis algorithm” which analyzes the cloth’s surface using two surface scan techniques: a curvature scan that uses a Kinect depth sensor to estimate the height deviation of the cloth surface, and a discontinuity scan that uses a Kinect RGB camera to detect wrinkles.
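As a rough illustration (my own sketch, not the authors’ published code) of how two such scans might be fused into a wrinkle map, with placeholder file names and assumed thresholds:

% Toy wrinkle map fusing a depth-based curvature scan with an RGB
% discontinuity scan. File names and thresholds are placeholders.
depth    = im2double(imread('cloth_depth.png')); % hypothetical Kinect depth frame
colorImg = imread('cloth_rgb.png');              % hypothetical Kinect RGB frame

% Curvature scan: height deviation of the surface from a smoothed version
curvature = abs(depth - imgaussfilt(depth,15));

% Discontinuity scan: intensity edges hint at wrinkle boundaries
edges = edge(rgb2gray(colorImg),'Canny');

% Flag pixels where both cues agree (0.01 is an assumed threshold)
wrinkleMask = (curvature > 0.01) & imdilate(edges,strel('disk',3));
imshow(wrinkleMask)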

Their solution was a success – check out their video below.
Deep Learning Helps Robot Learn to Walk the Way Humans Do
University of California, Berkeley researchers are using deep learning and NVIDIA GPUs to create a new generation of robots that adapt to changing environments and new situations without a human reprogramming them.
Their robot “Darwin” learned how to keep its balance on an uneven surface – and GPUs were essential for training at this level of complexity.
“If we did the training on CPU, it would have required a week. With a GPU, it ended up taking three hours,” said Igor Mordatch, who is now using GPUs hosted in the Amazon Web Services cloud.
Humanoid robots like this could one day tackle dangerous tasks such as handling rescue efforts or cleaning up disaster areas.
Deep Learning for Computer Vision with MATLAB and cuDNN
Deep learning is becoming ubiquitous. With recent advancements in deep learning algorithms and GPU technology, we are able to solve problems once considered impossible in fields such as computer vision, natural language processing, and robotics.
Deep learning uses deep neural networks which have been around for a few decades; what’s changed in recent years is the availability of large labeled datasets and powerful GPUs. Neural networks are inherently parallel algorithms and GPUs with thousands of cores can take advantage of this parallelism to dramatically reduce computation time needed for training deep learning networks. In this post, I will discuss how you can use MATLAB to develop an object recognition system using deep convolutional neural networks and GPUs.
Why Deep Learning for Computer Vision?
Machine learning techniques use data (images, signals, text) to train a machine (or model) to perform a task such as image classification, object detection, or language translation. Classical machine learning techniques are still being used to solve challenging image classification problems. However, they don’t work well when applied directly to images, because they ignore the structure and compositional nature of images. Until recently, state-of-the-art techniques made use of feature extraction algorithms that extract interesting parts of an image as compact low-dimensional feature vectors. These were then used along with traditional machine learning algorithms.
Enter deep learning. Deep convolutional neural networks (CNNs), a specific type of deep learning algorithm, address the gaps in traditional machine learning techniques, changing the way we solve these problems. CNNs not only perform classification, but they can also learn to extract features directly from raw images, eliminating the need for manual feature extraction. For computer vision applications you often need more than just image classification; you need state-of-the-art computer vision techniques for object detection, a bit of domain expertise, and the know-how to set up and use GPUs efficiently. Through the rest of this post, I will use an object recognition example to illustrate how easy it is to use MATLAB for deep learning, even if you don’t have extensive knowledge of computer vision or GPU programming.
Example: Object Detection and Recognition
The goal in this example is to detect a pet in a video and correctly label the pet as a cat or a dog. To run this example, you will need MATLAB®, Parallel Computing Toolbox™, Computer Vision System Toolbox™ and Statistics and Machine Learning Toolbox™. If you don’t have these tools, request a trial at www.mathworks.com/trial. For this problem I used an NVIDIA Tesla K40 GPU; you can run it on any MATLAB-compatible, CUDA-enabled NVIDIA GPU.
Our approach involves two steps:
- Object Detection: “Where is the pet in the video?”
- Object Recognition: “Now that I know where it is, is it a cat or a dog?”
Figure 1 shows what the final result looks like.
Using a Pretrained CNN Classifier
The first step is to train a classifier that can classify images of cats and dogs. I could either:
- Collect a massive amount of cropped, resized and labeled images of cats and dogs in a reasonable amount of time (good luck!), or
- Use a model that has already been trained on a variety of common objects and adapt it for my problem.

For this example, I’m going to go with option (2), which is common in practice. To do that, I’ll start with a pretrained CNN classifier that has been trained on the ImageNet dataset.
I will be using MatConvNet, a CNN package for MATLAB that uses the NVIDIA cuDNN library for accelerated training and prediction. [To learn more about cuDNN, see this Parallel Forall post.] Download and installation instructions for MatConvNet are available on its home page. Once I’ve installed MatConvNet on my computer, I can use the following MATLAB code to download and make predictions using the pretrained CNN classifier. Note: I also use the cnnPredict() helper function, which I’ve made available on Github.
%% Download and predict using a pretrained ImageNet model

% Setup MatConvNet
run(fullfile('matconvnet-1.0-beta15','matlab','vl_setupnn.m'));

% Download ImageNet model from MatConvNet pretrained networks repository
urlwrite('http://www.vlfeat.org/matconvnet/models/imagenet-vgg-f.mat', 'imagenet-vgg-f.mat');
cnnModel.net = load('imagenet-vgg-f.mat');

% Load and display an example image
imshow('dog_example.png');
img = imread('dog_example.png');

% Predict label using ImageNet trained vgg-f CNN model
label = cnnPredict(cnnModel,img);
title(label,'FontSize',20)
The pretrained CNN classifier works great out of the box at object classification. The CNN model is able to tell me that there is a beagle in the example image (Figure 2). While this is certainly a great starting point, our problem is a little different. I want to be able to (1) put a box around where the pet is (object detection) and then (2) label it accurately as a dog or a cat (classification). Let’s start by building a dog vs cat classifier from the pretrained CNN model.
Training a Dog vs. Cat Classifier
The objective is simple: given an image, I’d like to train a classifier that can accurately tell me whether it shows a dog or a cat. I can do that easily with the pretrained classifier and a few dog and cat images.
To get a small collection of labeled images for this project, I went around my office asking colleagues to send me pictures of their pets. I segregated the images and put them into separate ‘cat’ and ‘dog’ folders under a parent called ‘pet_images’. The advantage of using this folder structure is that the imageSet function can automatically manage image locations and labels. I loaded them all into MATLAB using the following code.
%% Load images from folder

% Use imageSet to load images stored in pet_images folder
imset = imageSet('pet_images','recursive');

% Preallocate arrays with fixed size for prediction
imageSize = cnnModel.net.normalization.imageSize;
trainingImages = zeros([imageSize sum([imset(:).Count])],'single');

% Load and resize images for prediction
% (use a running index so the second folder doesn't overwrite the first)
idx = 0;
for ii = 1:numel(imset)
    for jj = 1:imset(ii).Count
        idx = idx + 1;
        trainingImages(:,:,:,idx) = imresize(single(read(imset(ii),jj)),imageSize(1:2));
    end
end

% Get the image labels
trainingLabels = getImageLabels(imset);
summary(trainingLabels) % Display class label distribution
Feature Extraction using a CNN
What I’d like to do next is use this new dataset along with the pretrained ImageNet model to extract features. As I mentioned earlier, CNNs can learn to extract generic features from images. These features can be used to train a new classifier to solve a different problem, like classifying cats and dogs in our problem.
CNN algorithms are compute-intensive and can be slow to run. Since they are inherently parallel algorithms, I can use GPUs to speed up the computation. Here is the code that performs the feature extraction using the pretrained model, and a comparison of multithreaded CPU (Intel Core i7-3770 CPU) and GPU (NVIDIA Tesla K40 GPU) implementations.
%% Extract features using pretrained CNN

% Depending on how much memory you have on your GPU you may use a larger
% batch size. I have 400 images, so I choose 200 as my batch size
cnnModel.info.opts.batchSize = 200;

% Make prediction on a CPU
[~, cnnFeatures, timeCPU] = cnnPredict(cnnModel,trainingImages,'UseGPU',false);
% Make prediction on a GPU
[~, cnnFeatures, timeGPU] = cnnPredict(cnnModel,trainingImages,'UseGPU',true);

% Compare the performance increase
bar([sum(timeCPU),sum(timeGPU)],0.5)
title(sprintf('Approximate speedup: %2.00f x ',sum(timeCPU)/sum(timeGPU)))
set(gca,'XTickLabel',{'CPU','GPU'},'FontSize',18)
ylabel('Time(sec)'), grid on, grid minor


As you can see, the performance boost from using a GPU is significant: about 15x for this feature extraction problem.
The function cnnPredict is a wrapper around MatConvNet’s vl_simplenn predict function. The highlighted line of code in Figure 5 is the only modification you need to make to run the prediction on a GPU. Functions like gpuArray in the Parallel Computing Toolbox make it easy to prototype your algorithms using a CPU and quickly switch to GPUs with minimal code changes.
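For instance, here is a minimal sketch (my addition, not from the original post) of that CPU-to-GPU workflow; the matrix size is an arbitrary assumption:

% Minimal gpuArray sketch: the same code path on CPU and GPU
A = rand(4096,'single');   % ordinary CPU matrix
G = gpuArray(A);           % transfer it to GPU memory
B = G * G';                % the multiply now executes on the GPU
C = gather(B);             % copy the result back to host memory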

Train a Classifier Using CNN Features
With the features I extracted in the previous step, I’m now ready to train a “shallow” classifier. To train and compare multiple models interactively, I can use the Classification Learner app in the Statistics and Machine Learning Toolbox. Note: for an introduction to machine learning and classification workflows in MATLAB, check out this Machine Learning Made Easy webinar.
Next, I will directly train an SVM classifier using the extracted features by calling the fitcsvm function, using cnnFeatures as the input (predictors) and trainingLabels as the output (response values). I will also cross-validate the classifier to test its validation accuracy. The validation accuracy is an unbiased estimate of how the classifier would perform in practice on unseen data.
%% Train a classifier using extracted features

% Here I train a linear support vector machine (SVM) classifier.
svmmdl = fitcsvm(cnnFeatures,trainingLabels);

% Perform crossvalidation and check accuracy
cvmdl = crossval(svmmdl,'KFold',10);
fprintf('kFold CV accuracy: %2.2f\n',1-cvmdl.kfoldLoss)
svmmdl is my classifier that I can now use to classify an image as a cat or a dog.
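As a quick illustration (this snippet is my addition, with a hypothetical file name), classifying a single new image would look something like this:

% Hypothetical usage: classify one new image with the trained SVM.
% 'new_pet.png' is a placeholder file name.
newImg = imresize(single(imread('new_pet.png')),imageSize(1:2));
[~, newFeatures] = cnnPredict(cnnModel,newImg,'UseGPU',true); % CNN features
label = predict(svmmdl,newFeatures)                           % 'cat' or 'dog'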
Object Detection
Most images and video frames have a lot going on in them. In addition to a dog, there may be a tree, or a raccoon chasing the dog. Even a great image classifier, like the one I built in the previous step, will only work well if I can locate the object of interest in an image (the dog or cat), crop it, and then feed it to the classifier. The step of locating the object is called object detection.
For object detection, I will use a technique called Optical Flow that uses the motion of pixels in a video from frame to frame. Figure 6 shows a single frame of video with the motion vectors overlaid.

The next step in the detection process is to separate out pixels that are moving, and then use the Image Region Analyzer app to analyze the connected components in the binary image to filter out the noisy pixels caused by the camera motion. The output of the app is a MATLAB function (I’m going to call it findPet) that can locate where the pet is in the field of view.
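The app-generated function isn’t shown in the original post, so here is a hand-written sketch of what findPet might look like; the motion-magnitude and minimum-area thresholds are my assumptions, and the returned boxes are in the downsampled frame’s coordinates (the full pipeline may need to rescale them):

% A hypothetical sketch of the app-generated findPet function.
function bboxes = findPet(frameGray,opticFlow)
    flow   = estimateFlow(opticFlow,frameGray);  % per-pixel motion vectors
    moving = flow.Magnitude > 4;                 % keep strongly moving pixels (assumed threshold)
    moving = bwareaopen(moving,50);              % drop small noisy regions (assumed minimum area)
    stats  = regionprops(moving,'BoundingBox');  % connected-component boxes
    bboxes = reshape([stats.BoundingBox],4,[])'; % one row per detected region
end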
Tying the Workflow Together
I now have all the pieces I need to build a pet detection and recognition system.
To quickly recap, I can:
- Detect the location of the pet in new images;
- Crop the pet from the image and extract features using a pretrained CNN;
- Classify the features using an SVM classifier.
Pet Detection and Recognition
Tying all these pieces together, the following code shows my complete MATLAB pet detection and recognition system.
%% Tying the workflow together
vr = VideoReader(fullfile('PetVideos','videoExample.mov'));
vw = VideoWriter('test.avi','Motion JPEG AVI');
opticFlow = opticalFlowFarneback;
frameNumber = 0; % Initialize the frame counter before the loop
open(vw);

while hasFrame(vr)
    % Count frames
    frameNumber = frameNumber + 1;

    % Step 1. Read Frame
    videoFrame = readFrame(vr);

    % Step 2. Detect ROI
    vFrame = imresize(videoFrame,0.25);    % Downsample video frame
    frameGray = rgb2gray(vFrame);          % Convert to gray for detection
    bboxes = findPet(frameGray,opticFlow); % Find bounding boxes
    if ~isempty(bboxes)
        img = zeros([imageSize size(bboxes,1)]);
        for ii = 1:size(bboxes,1)
            img(:,:,:,ii) = imresize(imcrop(videoFrame,bboxes(ii,:)),imageSize(1:2));
        end

        % Step 3. Recognize object
        % (a) Extract features using a CNN
        [~, scores] = cnnPredict(cnnModel,img,'UseGPU',true,'display',false);
        % (b) Predict using the trained SVM Classifier
        label = predict(svmmdl,scores);

        % Step 4. Annotate object
        videoFrame = insertObjectAnnotation(videoFrame,'Rectangle',bboxes,cellstr(label),'FontSize',40);
    end

    % Step 5. Write video to file
    writeVideo(vw,videoFrame);
    fprintf('Frames processed: %d of %d\n',frameNumber,ceil(vr.FrameRate*vr.Duration));
end
close(vw);
Conclusion
Solutions to real-world computer vision problems often require tradeoffs depending on your application: performance, accuracy, and simplicity of the solution. Advances in techniques such as deep learning have significantly raised the bar in terms of the accuracy of tasks like visual recognition, but the performance cost had long been too significant for mainstream adoption. GPU technology has closed this gap, accelerating training and prediction speeds by orders of magnitude.
MATLAB makes computer vision with deep learning much more accessible. The combination of an easy-to-use application and programming environment, a complete library of standard computer vision and machine learning algorithms, and tightly integrated support for CUDA-enabled GPUs makes MATLAB an ideal platform for designing and prototyping computer vision solutions.
AI invasion will allow workers to empathise

Jobs for the bots: robots will take on mundane work, enabling humans to focus on interpersonal tasks.
There’s a clue to the future of work in the relief you feel when your phone call to a big corporation is answered by, of all things, a human.
It makes sense. People are replete with empathy and compassion, like to solve problems and enjoy communicating through stories. And these profoundly human traits are the areas where artificial intelligence (AI) trails humans. Because they are our strengths, they point to the future of the office and to our workplace relationships with robots and AI.
In the future, people will spend more time dealing with other people rather than investing their energy in spreadsheets, machinery and computer screens. Rote decision making, repetitive tasks and data management will be owned by our silicon-chip workmates.
You can already glimpse this labour allocation in action – there are accounting apps that extract the information from photographs of receipts and automatically compile end-of-month reports. Meanwhile, the accelerating capability of AI to understand spoken human language will cause immense disruption. “Will we ultimately be able to replace most telephone operators? Yes,” says Paul Murphy, chief executive of voice technology company Clarify.io. “In fact I’d say speech recognition and understanding has the potential to eliminate any job where the role of the human is that of intermediary.”
Meanwhile, we will be employed to tell stories, empathise, see the big picture, solve complex problems and adapt fast to changing situations.
Rather than displacing humans, AI will augment human strengths. This will lead to the invention of new roles, which fall into three categories.
• Thinking differently
AI and robots excel at following pre-set rules. People will thrive when they learn to harness machines for data insights, which they can use for problem-solving and innovation. An architect, for example, will be able to work much faster than today because of the range of technologies available, such as augmented reality visualisation and virtual reality headsets. But providing a solution that fits within the constraints of space, planning restrictions, budget and aesthetic style would be nigh-on impossible to automate.
• Thinking bigger
Computers can’t see the context, connections and patterns that humans can, despite crunching vast amounts of data at speed. For example, an automated ad-buying program might be brilliant at buying online advertising space for the right audience at the right price, but it might fail to realise that the day after an air accident would be the wrong day to advertise certain products or certain taglines. The future will involve people who oversee machine decision-making.
• Social interaction
The analytical powers of robots enable them to suggest decisions in healthcare, financial investment and other areas based on huge quantities of data. IBM’s Watson computer, for example, can monitor a vast array of data inputs to identify possible medical problems and propose courses of treatment. But the communication of advice and the contextualised understanding of the best course of action for a specific patient is best handled by humans. As with medicine, so with finance: the role of the specialist human will be to mediate between the wonders of automation and the needs and desires of the patient or customer.
It’s happening: ‘Pepper’ robot gains emotional intelligence

Masayoshi Son unveils Pepper.
Last week we weighed in on the rise of robotica, aka sexbots, noting that improvements in emotion and speech recognition would likely spur development in this field. Now a new offering from Softbank promises to be just such a game changer, equipping robots with the technology necessary to interact with humans in social settings. The robot is called Pepper, and it is being launched at an exorbitant cost to its makers, Softbank and Aldebaran.
Pepper is being billed as the first “emotionally intelligent” robot. While it can’t wash your floors or take out the trash, it may just defuse your next domestic row with a witty remark or well-timed turn of phrase. It accomplishes such feats through the use of novel emotion recognition techniques. Emotion recognition may seem like a strange, and perhaps unnecessary, skill for a robot. However, it will be a crucial one if machines are ever to make the leap from factory worker to domestic caregiver.
Even in humans, emotion recognition can be devilishly difficult to achieve. Those afflicted with autism represent a portion of humanity that has been referred to as “emotion-blind” due to the difficulty they have in reading expressions. In many ways, robots have hitherto occupied similar territory. While Softbank hasn’t revealed the exact proprietary algorithms used to achieve emotion recognition, the smart money is on some form of deep neural network.
To date, most attempts at emotion recognition have employed a branch of artificial intelligence called machine learning, in which training data, most often labeled, is fed into an algorithm that uses statistical techniques to “recognize” characteristics that set the examples apart. It’s likely that Pepper uses a variation on this, employing algorithms trained on thousands of labeled photographs or videos to learn what combination of pixels represent a smiling face versus a startled or angry one.
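As a toy illustration of that supervised-learning recipe (my sketch with synthetic data, not Pepper’s actual pipeline), a classifier can learn to separate labeled “expression” feature vectors:

% Toy sketch of supervised emotion recognition on synthetic features.
rng(0);                                              % reproducible fake data
faceFeatures  = [randn(50,10)+1; randn(50,10)-1];    % two synthetic clusters
emotionLabels = [repmat({'happy'},50,1); repmat({'angry'},50,1)];
emotionModel  = fitcsvm(faceFeatures,emotionLabels); % SVM trained on labeled examples
predicted     = predict(emotionModel,randn(1,10)+1)  % classify a new sample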
Pepper is also connected to the cloud, feeding data from its sensors to server clusters, where the lion’s share of processing will take place. This should allow their emotion recognition algorithms to improve over time, as repeated use provides fresh training examples. A similar method enabled Google’s speech recognition system to overtake so many others in the field. Every time someone uses the system and corrects a misapprehended word, they provide a new training example for the AI to improve its performance. In the case of a massive search system like Google’s, training examples add up very quickly.
This may explain why Softbank is willing to go ahead with the launch of Pepper despite the financials indicating it will be a loss-making venture. If, rather than optimizing for profit, Softbank is using Pepper as a means toward perfecting emotion recognition, then this may be part of a larger play to gain superior intellectual property. If that’s the case, it probably won’t be long before we see other tech giants wading into the arena, offering new and competitive variations on Pepper.
While it may seem strange to think of our emotions as being a lucrative commodity, commanding millions of tech dollars and vied for by sleek-looking robots, such a reality could well be in store.