
Artificial Intelligence

Diagnosing Cancer with Deep Learning and GPUs

Using GPU-accelerated deep learning, researchers at The Chinese University of Hong Kong pushed the boundaries of cancer image analysis in a way that could one day save physicians and patients precious time.

The team used a TITAN X GPU to win the 2015 Gland Segmentation Challenge held at the Medical Image Computing and Computer Assisted Intervention (MICCAI) conference, the world’s leading conference on medical imaging.

Traditionally, pathologists diagnose cancer by looking for abnormalities in tumor tissue and cells under a microscope, but it’s a time-consuming process that is open to error.

An overview of the team’s proposed framework

The research team trained their deep convolutional neural network on a set of images of known abnormalities. They then used the trained network to segment individual glands from tissue, making it easier to distinguish individual cells and to determine their size, shape and location relative to other cells. From these measurements, pathologists can determine the likelihood of malignancy.

“Training with GPUs was 100 times faster than with CPUs,” said Hao Chen, a third-year Ph.D. student and member of the team that developed the solution. “That speed is going to become even more important as we advance our work.”

New GPU Computing Model for Artificial Intelligence

Yann LeCun, Director of Facebook AI Research, invited NVIDIA CEO Jen-Hsun Huang to speak at “The Future of AI” symposium at NYU, where industry leaders discussed the state of AI and its continued advancement.

Jen-Hsun published a blog post on his talk that covers topics such as how deep learning is a new software model that needs a new computing model; why AI researchers have adopted GPU-accelerated computing; NVIDIA’s ongoing efforts to advance AI as its adoption grows exponentially; and why, after all these years, AI has taken off.

In just two years, the number of companies NVIDIA collaborates with on deep learning has jumped nearly 35x to over 3,400 companies.

Deep Learning for Computer Vision with MATLAB and cuDNN

Deep learning is becoming ubiquitous. With recent advancements in deep learning algorithms and GPU technology, we are able to solve problems once considered impossible in fields such as computer vision, natural language processing, and robotics.

Deep learning uses deep neural networks which have been around for a few decades; what’s changed in recent years is the availability of large labeled datasets and powerful GPUs. Neural networks are inherently parallel algorithms and GPUs with thousands of cores can take advantage of this parallelism to dramatically reduce computation time needed for training deep learning networks. In this post, I will discuss how you can use MATLAB to develop an object recognition system using deep convolutional neural networks and GPUs.

Pet detection and recognition system.

Why Deep Learning for Computer Vision?

Machine learning techniques use data (images, signals, text) to train a machine (or model) to perform a task such as image classification, object detection, or language translation. Classical machine learning techniques are still being used to solve challenging image classification problems. However, they don’t work well when applied directly to images, because they ignore the structure and compositional nature of images. Until recently, state-of-the-art techniques made use of feature extraction algorithms that extract interesting parts of an image as compact low-dimensional feature vectors. These were then used along with traditional machine learning algorithms.
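
To make that classical pipeline concrete, here is a minimal MATLAB sketch; the choice of HOG features and the file name are illustrative assumptions, not from the original post.

% Classical pipeline: hand-engineered features + traditional classifier
img = imread('dog_example.png');              % assumed example image
features = extractHOGFeatures(rgb2gray(img)); % compact feature vector
% Repeat over a labeled image set, then train a traditional classifier:
% mdl = fitcsvm(allFeatures, allLabels);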

Enter deep learning. Deep convolutional neural networks (CNNs), a specific type of deep learning algorithm, address the gaps in traditional machine learning techniques, changing the way we solve these problems. CNNs not only perform classification, but they can also learn to extract features directly from raw images, eliminating the need for manual feature extraction. For computer vision applications you often need more than just image classification; you need state-of-the-art computer vision techniques for object detection, a bit of domain expertise, and the know-how to set up and use GPUs efficiently. Through the rest of this post, I will use an object recognition example to illustrate how easy it is to use MATLAB for deep learning, even if you don’t have extensive knowledge of computer vision or GPU programming.

Example: Object Detection and Recognition

The goal in this example is to detect a pet in a video and correctly label the pet as a cat or a dog. To run this example, you will need MATLAB®, Parallel Computing Toolbox™, Computer Vision System Toolbox™ and Statistics and Machine Learning Toolbox™. If you don’t have these tools, request a trial at www.mathworks.com/trial. For this problem I used an NVIDIA Tesla K40 GPU; you can run the example on any CUDA-enabled NVIDIA GPU supported by MATLAB.

Our approach involves two steps:

  1. Object Detection: “Where is the pet in the video?”
  2. Object Recognition: “Now that I know where it is, is it a cat or a dog?”

Figure 1 shows what the final result looks like.

Using a Pretrained CNN Classifier

The first step is to train a classifier that can classify images of cats and dogs. I could either:

  1. Collect a massive amount of cropped, resized and labeled images of cats and dogs in a reasonable amount of time (good luck!), or
  2. Use a model that has already been trained on a variety of common objects and adapt it for my problem.

Figure 2: Pretrained ImageNet model classifying the image of the dog as ‘beagle’.

For this example, I’m going to go with option (2), which is common in practice. To do that, I’ll start with a pretrained CNN classifier that has been trained on the ImageNet dataset.

I will be using MatConvNet, a CNN package for MATLAB that uses the NVIDIA cuDNN library for accelerated training and prediction. [To learn more about cuDNN, see this Parallel Forall post.] Download and install instructions for MatConvNet are available on its home page. Once I’ve installed MatConvNet on my computer, I can use the following MATLAB code to download and make predictions using the pretrained CNN classifier. Note: I also use the cnnPredict() helper function, which I’ve made available on Github.

%% Download and predict using a pretrained ImageNet model

% Setup MatConvNet
run(fullfile('matconvnet-1.0-beta15','matlab','vl_setupnn.m'));

% Download ImageNet model from MatConvNet pretrained networks repository
urlwrite('http://www.vlfeat.org/matconvnet/models/imagenet-vgg-f.mat', 'imagenet-vgg-f.mat');
cnnModel.net = load('imagenet-vgg-f.mat');

% Load and display an example image
imshow('dog_example.png');
img = imread('dog_example.png');

% Predict label using ImageNet trained vgg-f CNN model
label = cnnPredict(cnnModel,img);
title(label,'FontSize',20)

The pretrained CNN classifier works great out of the box at object classification. The CNN model is able to tell me that there is a beagle in the example image (Figure 2). While this is certainly a great starting point, our problem is a little different. I want to be able to (1) put a box around where the pet is (object detection) and then (2) label it accurately as a dog or a cat (classification). Let’s start by building a dog vs cat classifier from the pretrained CNN model.

Training a Dog vs. Cat Classifier

The objective is simple: given an image, I’d like to train a classifier that can accurately tell me whether it shows a dog or a cat. I can do that easily with the pretrained classifier and a few dog and cat images.

To get a small collection of labeled images for this project, I went around my office asking colleagues to send me pictures of their pets. I segregated the images and put them into separate ‘cat’ and ‘dog’ folders under a parent called ‘pet_images’. The advantage of using this folder structure is that the imageSet function can automatically manage image locations and labels. I loaded them all into MATLAB using the following code.

%% Load images from folder
% Use imageSet to load images stored in pet_images folder
imset = imageSet('pet_images','recursive');

% Preallocate arrays with fixed size for prediction
imageSize = cnnModel.net.normalization.imageSize;
trainingImages = zeros([imageSize sum([imset(:).Count])],'single');

% Load and resize images for prediction, tracking a running index so
% images from the second folder don't overwrite those from the first
imgCounter = 0;
for ii = 1:numel(imset)
  for jj = 1:imset(ii).Count
      imgCounter = imgCounter + 1;
      trainingImages(:,:,:,imgCounter) = imresize(single(read(imset(ii),jj)),imageSize(1:2));
  end
end

% Get the image labels
trainingLabels = getImageLabels(imset);
summary(trainingLabels) % Display class label distribution

Feature Extraction using a CNN

What I’d like to do next is use this new dataset along with the pretrained ImageNet to extract features. As I mentioned earlier, CNNs can learn to extract generic features from images. These features can be used to train a new classifier to solve a different problem, like classifying cats and dogs in our problem.

CNN algorithms are compute-intensive and can be slow to run. Since they are inherently parallel algorithms, I can use GPUs to speed up the computation. Here is the code that performs the feature extraction using the pretrained model, and a comparison of multithreaded CPU (Intel Core i7-3770 CPU) and GPU (NVIDIA Tesla K40 GPU) implementations.

%% Extract features using pretrained CNN

% Depending on how much memory you have on your GPU you may use a larger
% batch size. I have 400 images, so I choose 200 as my batch size
cnnModel.info.opts.batchSize = 200;

% Make prediction on a CPU
[~, cnnFeatures, timeCPU] = cnnPredict(cnnModel,trainingImages,'UseGPU',false);
% Make prediction on a GPU
[~, cnnFeatures, timeGPU] = cnnPredict(cnnModel,trainingImages,'UseGPU',true);

% Compare the performance increase
bar([sum(timeCPU),sum(timeGPU)],0.5)
title(sprintf('Approximate speedup: %2.00f x ',sum(timeCPU)/sum(timeGPU)))
set(gca,'XTickLabel',{'CPU','GPU'},'FontSize',18)
ylabel('Time(sec)'), grid on, grid minor

Figure 3: Comparison of execution times for feature extraction using a CPU (left) and NVIDIA Tesla K40 GPU (right).

Figure 4: The CPU and GPU time required to extract features from 1128 images.

As you can see, the performance boost you get from using a GPU is significant: about 15x for this feature extraction problem.

The function cnnPredict is a wrapper around MatConvNet’s vl_simplenn predict function. The highlighted line of code in Figure 5 is the only modification you need to make to run the prediction on a GPU. Functions like gpuArray in the Parallel Computing Toolbox make it easy to prototype your algorithms using a CPU and quickly switch to GPUs with minimal code changes.

Figure 5: The `gpuArray` and `gather` functions allow you to transfer data from the MATLAB workspace to the GPU and back.
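
The pattern itself is easy to sketch. The following snippet is illustrative only (not the actual cnnPredict internals): move the input onto the GPU with gpuArray, run the same MATLAB code, and bring the result back with gather.

% Illustrative sketch of the CPU/GPU switch inside a prediction function
A = rand(1000,'single');   % data starts in the MATLAB workspace (CPU)
useGPU = true;             % in cnnPredict, the 'UseGPU' option sets this
if useGPU
    A = gpuArray(A);       % transfer the array to GPU memory
end
B = A * A';                % identical code runs on CPU and GPU arrays
B = gather(B);             % copy the result back to the CPU workspace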

Train a Classifier Using CNN Features

With the features I extracted in the previous step, I’m now ready to train a “shallow” classifier. To train and compare multiple models interactively, I can use the Classification Learner app in the Statistics and Machine Learning Toolbox. Note: for an introduction to machine learning and classification workflows in MATLAB, check out this Machine Learning Made Easy webinar.

Next, I will directly train an SVM classifier using the extracted features by calling the fitcsvm function using cnnFeatures as the input or predictors and trainingLabels as the output or response values. I will also cross-validate the classifier to test its validation accuracy. The validation accuracy is an unbiased estimate of how the classifier would perform in practice on unseen data.

%% Train a classifier using extracted features

% Here I train a linear support vector machine (SVM) classifier.
svmmdl = fitcsvm(cnnFeatures,trainingLabels);

% Perform crossvalidation and check accuracy
cvmdl = crossval(svmmdl,'KFold',10);
fprintf('kFold CV accuracy: %2.2f\n',1-cvmdl.kfoldLoss)

svmmdl is my classifier that I can now use to classify an image as a cat or a dog.
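
For a single new image, classification might look like the following hypothetical snippet, which assumes the cnnPredict helper from earlier returns the CNN features as its second output:

% Extract CNN features for one cropped pet image, then classify
[~, feat] = cnnPredict(cnnModel, img, 'UseGPU', true);
label = predict(svmmdl, feat);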

Object Detection

Most images and video frames have a lot going on in them. In addition to a dog, there may be a tree or a raccoon chasing the dog. Even a great image classifier, like the one I built in the previous step, will only work well if I can locate the object of interest in an image (dog or cat), crop it, and then feed it to the classifier. The step of locating the object is called object detection.

For object detection, I will use a technique called optical flow, which uses the motion of pixels in a video from frame to frame. Figure 6 shows a single frame of video with the motion vectors overlaid.

Figure 6: A single frame of video with motion vectors overlaid (left) and magnitude of the motion vectors (right).

The next step in the detection process is to separate out pixels that are moving, and then use the Image Region Analyzer app to analyze the connected components in the binary image to filter out the noisy pixels caused by the camera motion. The output of the app is a MATLAB function (I’m going to call it findPet) that can locate where the pet is in the field of view.
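
The post relies on the app-generated code, so the function body isn’t shown; below is a hypothetical sketch of what findPet might look like, where the motion threshold and minimum blob size are assumed values.

function bboxes = findPet(frameGray, opticFlow)
% Threshold the optical flow magnitude, remove small noisy regions,
% and return bounding boxes of the remaining connected components.
flow = estimateFlow(opticFlow, frameGray);   % motion since last frame
moving = flow.Magnitude > 1;                 % assumed motion threshold
moving = bwareaopen(moving, 500);            % drop small noisy blobs
stats = regionprops(moving, 'BoundingBox');
bboxes = round(vertcat(stats.BoundingBox));  % one row per detection
end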

Tying the Workflow Together

I now have all the pieces I need to build a pet detection and recognition system.

To quickly recap, I can:

  • Detect the location of the pet in new images;
  • Crop the pet from the image and extract features using a pretrained CNN;
  • Classify the features using an SVM classifier.

Pet Detection and Recognition

Tying all these pieces together, the following code shows my complete MATLAB pet detection and recognition system.

%% Tying the workflow together
vr = VideoReader(fullfile('PetVideos','videoExample.mov'));
vw = VideoWriter('test.avi','Motion JPEG AVI');
opticFlow = opticalFlowFarneback;
frameNumber = 0; % Initialize the frame counter before the loop
open(vw);

while hasFrame(vr)
    % Count frames
    frameNumber = frameNumber + 1;

    % Step 1. Read Frame
    videoFrame = readFrame(vr);

    % Step 2. Detect ROI
    vFrame = imresize(videoFrame,0.25);     % Downsample frame for detection
    frameGray = rgb2gray(vFrame);           % Convert to gray for detection
    bboxes = findPet(frameGray,opticFlow);  % Find bounding boxes
    if ~isempty(bboxes)
        bboxes = 4*bboxes; % Scale boxes from the quarter-size frame back to full resolution
        img = zeros([imageSize size(bboxes,1)]);
        for ii = 1:size(bboxes,1)
            img(:,:,:,ii) = imresize(imcrop(videoFrame,bboxes(ii,:)),imageSize(1:2));
        end

        % Step 3. Recognize object
        % (a) Extract features using a CNN
        [~, scores] = cnnPredict(cnnModel,img,'UseGPU',true,'display',false);

        % (b) Predict using the trained SVM Classifier
        label = predict(svmmdl,scores);

        % Step 4. Annotate object
        videoFrame = insertObjectAnnotation(videoFrame,'Rectangle',bboxes,cellstr(label),'FontSize',40);
    end

    % Step 5. Write video to file
    writeVideo(vw,videoFrame);

    fprintf('Frames processed: %d of %d\n',frameNumber,ceil(vr.FrameRate*vr.Duration));
end
close(vw);

Conclusion

Solutions to real-world computer vision problems often require tradeoffs depending on your application: performance, accuracy, and simplicity of the solution. Advances in techniques such as deep learning have significantly raised the bar in the accuracy of tasks like visual recognition, but until recently the computational cost was too high for mainstream adoption. GPU technology has closed this gap, accelerating training and prediction speeds by orders of magnitude.

MATLAB makes computer vision with deep learning much more accessible. The combination of an easy-to-use application and programming environment, a complete library of standard computer vision and machine learning algorithms, and tightly integrated support for CUDA-enabled GPUs makes MATLAB an ideal platform for designing and prototyping computer vision solutions.

AI invasion will allow workers to empathise

Jobs for the bots: robots will take on mundane work, enabling humans to focus on interpersonal tasks.

There’s a clue to the future of work in the relief you feel when your phone call to a big corporation is answered by, of all things, a human.

It makes sense. People are replete with empathy and compassion, like to solve problems and enjoy communicating through stories. And these profoundly human traits are the areas where artificial intelligence (AI) trails humans. Because they are our strengths they point to the future of the office and to our workplace relationships with robots and AI.

In the future, people will spend more time dealing with other people rather than investing their energy in spreadsheets, machinery and computer screens. Rote decision making, repetitive tasks and data management will be owned by our silicon-chip workmates.

You can already glimpse this labour allocation in action – there are accounting apps that extract the information from photographs of receipts and automatically compile end-of-month reports. Meanwhile, the accelerating capability of AI to understand spoken human language will cause immense disruption. “Will we ultimately be able to replace most telephone operators? Yes,” says Paul Murphy, chief executive of voice technology company Clarify.io. “In fact I’d say speech recognition and understanding has the potential to eliminate any job where the role of the human is that of intermediary.”

Meanwhile, we will be employed to tell stories, empathise, see the big picture, solve complex problems and adapt fast to changing situations.

Rather than displacing humans, AI will augment human strengths. This will lead to the invention of new roles, which fall into three categories.

Thinking differently

AI and robots excel at following pre-set rules. People will thrive when they learn to harness machines for data insights, which they can use for problem-solving and innovation. An architect, for example, will be able to work much faster than today because of the range of technologies available, such as augmented reality visualisation and virtual reality headsets. But providing a solution that fits within the constraints of space, planning restrictions, budget and aesthetic style would be nigh-on impossible to automate.

Thinking bigger

Computers can’t see the context, connection and patterns that humans can, despite crunching vast amounts of data at speed. For example, an automated ad-buying program might be brilliant at buying online advertising space for the right audience at the right price, but it might fail to realise that the day after an air accident would be the wrong day to advertise certain products or certain taglines. The future will involve people who oversee machine decision-making.

Social interaction

The analytical powers of robots enable them to suggest decisions in healthcare, financial investment and other areas based on huge quantities of data. IBM’s Watson computer, for example, can monitor a vast array of data inputs to identify possible medical problems and propose courses of treatment. But the communication of advice and the contextualised understanding of the best course of action for a specific patient is best handled by humans. As with medicine, so with finance: the role of the specialist human will be to mediate between the wonders of automation and the needs and desires of the patient or customer.

Artificial intelligence to amplify digital transformation: Vishal Sikka

The digital transformation can best be achieved by adopting automation and artificial intelligence (AI) and the growing symbiosis between Infosys and Oracle is going to help achieve this goal faster than ever, said Infosys CEO Vishal Sikka.

Addressing a gathering of top innovators at Oracle’s OpenWorld 2015 conference on October 27, Sikka emphasised how AI can be a great amplifier, simplifying and enabling existing landscapes as well as building intelligent systems that help us solve our most complex emerging problems.

“The world is looking at providing services in a better way. I observe three major shifts – focus on experience among consumers, emergence of AI and the ultimate cloud phenomenon,” he added.

Sikka also announced that Infosys Finacle’s core banking solution – running on the new and secure Oracle SuperCluster M7 – has set a new record for the number of banking transactions processed.

“The solution supported more than two billion bank accounts with near linear scalability. The results showcase Finacle’s capabilities to manage extraordinarily large transaction volumes to help banks cater to their growing business demands at reduced costs,” he said.

The tests were conducted across a mix of delivery channel transactions that could originate from branches, ATMs, online and mobile channels.

According to Ganesh Ramamurthy, senior vice president (product development) at Oracle, the SuperCluster M7 and the SPARC T7 and M7 systems, built on the SPARC M7 processor, offer breakthrough technology for memory intrusion protection and encryption.

“Infosys’ latest Finacle results on SuperCluster M7 demonstrate the superior performance, efficiency and security capabilities of SPARC M7 with Oracle Database 12c and WebLogic Server 12c for critical banking functions,” he explained.

According to Sikka, their future strategy would not be complete without help from Oracle and its diverse portfolio.

“We together are creating a sort of symbiosis. Infosys is emerging as a great change agent and we are collaborating with Oracle on innovations in Java,” he said.

He also spoke about AiKiDo – a new offering that comprises three enhanced service offerings in knowledge-based IT (KBIT), platforms and design thinking.

Infosys has deployed a number of systems that replicate human decision-making in areas such as financial service regulation and ticketing of IT issues, enabling productivity improvements of up to 40 percent and saving customers millions of dollars annually.

In addition to this, Infosys is working with global clients to use artificial intelligence to address business challenges.

Infosys is utilising artificial intelligence techniques to solve complex engineering problems in design, testing, and certification of complex engineering products.

“I am optimistic that artificial intelligence techniques will help us solve next-generation problems, and that humans will play the most important part in this process,” Dr Sikka added.

Infosys has delivered nearly 30 projects for clients using artificial intelligence. Many of these first projects have been in manufacturing and financial services.

Infosys is currently developing solutions based on artificial intelligence to solve complex problems in the engineering space.

It’s happening: ‘Pepper’ robot gains emotional intelligence

Last week we weighed in on the rise of robotica, aka sexbots, noting that improvements in emotion and speech recognition would likely spur development in this field. Now a new offering from SoftBank promises to be just such a game changer, equipping robots with the technology necessary to interact with humans in social settings. The robot is called Pepper, and it is being launched at a loss by its makers SoftBank and Aldebaran.

Pepper is being billed as the first “emotionally intelligent” robot. While it can’t wash your floors or take out the trash, it may just defuse your next domestic row with a witty remark or well-timed turn of phrase. It accomplishes such feats through the use of novel emotion recognition techniques. Emotion recognition may seem like a strange, and perhaps unnecessary, skill for a robot. However, it will be a crucial one if machines are ever to make the leap from factory worker to domestic caregiver.

Even in humans, emotion recognition can be devilishly difficult to achieve. Those afflicted with autism represent a portion of humanity that has been referred to as “emotion-blind” due to the difficulty they have in reading expressions.  In many ways, robots have hitherto occupied similar territory. While Softbank hasn’t revealed the exact proprietary algorithms used to achieve emotion recognition, the smart money is on some form of deep neural network.

To date, most attempts at emotion recognition have employed a branch of artificial intelligence called machine learning, in which training data, most often labeled, is fed into an algorithm that uses statistical techniques to “recognize” characteristics that set the examples apart. It’s likely that Pepper uses a variation on this, employing algorithms trained on thousands of labeled photographs or videos to learn what combination of pixels represent a smiling face versus a startled or angry one.

Pepper is also connected to the cloud, feeding data from its sensors to server clusters, where the lion’s share of processing will take place. This should allow its emotion recognition algorithms to improve over time, as repeated use provides fresh training examples. A similar method enabled Google’s speech recognition system to overtake so many others in the field. Every time someone uses the system and corrects a misapprehended word, they provide a new training example for the AI to improve its performance. In the case of a massive search system like Google’s, training examples add up very quickly.

This may explain why SoftBank is willing to go ahead with the launch of Pepper despite the financials indicating it will be a loss-making venture. If, rather than optimizing for profit, the company is using Pepper as a means of perfecting emotion recognition, then this may be part of a larger play to gain superior intellectual property. If that’s the case, then it probably won’t be long before we see other tech giants wading into the arena, offering new and competitive variations on Pepper.

While it may seem strange to think of our emotions as being a lucrative commodity, commanding millions of tech dollars and vied for by sleek-looking robots, such a reality could well be in store.

Microsoft Bing Predicts and the future of gambling

Like an 800-pound gorilla flailing wildly in a Victorian tea house, artificial intelligence has been disrupting one industry after another of late. Now the latest to feel the burn are the gambling consortiums of Las Vegas. Microsoft’s AI engine, Bing Predicts, made headlines recently by beating the Las Vegas odds in predicting winners for week one of the NFL season. Its previous successes are even more breathtaking: it correctly predicted the outcomes of all 15 games in the 2014 Brazil World Cup knockout round and almost all the results of the 2015 Academy Awards, including the winners of best picture, best director, best actor, and best actress. Which is all to say that Microsoft’s AI is turning out to be an incredibly good gambler, and the ramifications will go well beyond the world of sports betting.

Let’s take a look at how Bing Predicts was able to outwit the best sporting minds in Las Vegas, and in the process, explore how AI is poised to upend the world of professional gambling. The basic principle driving Microsoft’s success at gambling rests on the “wisdom of the crowd.” When predicting NFL winners, the AI algorithm not only takes into account such diverse variables as a team’s previous margins of victory, player statistics (rushing yards and passing yards, for example), stadium surfaces, and weather conditions; the secret sauce that seems to give it an edge over other experts is its ability to quantify aggregate sentiment on the social web.

Walter Sun and the Bing Predicts team at Microsoft

By tapping into social media and digesting the opinions of thousands, if not millions, of Twitter and Facebook users, the AI can pick up intangibles that defy even the most hardcore of human statisticians. For instance, the model might detect a rumor among Twitter users that the Patriots starting quarterback just had a fight with his wife in the wee hours before Sunday’s game and hence is less likely to be at the top of his form. While such rumors may prove to be unfounded, they have a core of truth enough of the time that they give the model a statistical advantage. In precise terms, Walter Sun, who heads up the Bing Predicts team, found that analyzing this so-called “wisdom of the crowd” actually increases the accuracy of their predictions by 5%.

While 5% may seem like a small amount, when it comes to beating the Las Vegas odds, a consistent 5% edge over the experts equals a fortune in gambling earnings and a troubling turn of events for Vegas bookies. This raises a real question: can professional sports gambling survive in a world where a Silicon Valley corporation holds the highest card in the deck? But if Vegas bookies think they have a lot to worry about, they are just one among many. A whole slew of industries are essentially gambling houses, and any algorithm that could beat their models would pose a major threat to their very existence.

Notable among these are the fields of insurance and commodities trading. If Microsoft or another one of the Silicon Valley behemoths that are developing cutting edge AI can leverage their advantage in the prediction business to outgun the industry leaders in some of these fields, they wouldn’t have to wait long to achieve supremacy in the market. Brace yourself: We may be headed towards a world dominated by a handful of tech corporations vying with each other to develop the best AI prediction algorithm.

Playing Games Might Help AI Advance

A new company wants to build artificial intelligence through game play.

The “artificial intelligence” found in most computer games isn’t very intelligent at all. Characters in the games tend to be controlled by algorithms that produce patterns of behaviors designed to seem natural and realistic, but the characters are actually rigid, with no capacity to learn or adapt.

One company hopes to come up with something a lot smarter by providing a platform that lets software learn how to behave within a game, whether in response to basic stimuli or to more complex situations. The hope is that this kind of learning will eventually allow complex behavior to emerge in game characters—and make for better AI in a range of applications.

Keen Software House, based in the Czech Republic and the U.K., makes several “sandbox” games in which players can construct complex virtual structures and machines using realistic materials and physics. This July, the company spun out a business called GoodAI that aims to develop sophisticated AI using machine learning. Marek Rosa, Keen’s CEO, invested $10 million of his own money in the new company.

GoodAI has released open-source software called Brain Simulator that can be used to train a series of artificial neural networks in how to respond to stimuli from a game environment. Through trial and error, these networks can learn how to play a simple game. And several networks can be chained together to create more complex behavior, making it possible for software to learn how to achieve an objective that may require numerous steps.

The company’s researchers have shown that Brain Simulator can be used to train software to play some simple two-dimensional games. These include Breakout, in which a player bounces a ball off a wall of bricks (which disappear once hit), and a maze game that requires completing a series of different tasks.

The virtual character in the maze game “will start to do some random actions, and will be observing how he is changing the environment, or how it’s changing him,” Rosa says. “While he’s changing the environment, he’s learning all these associations and these patterns.”

Learning associations and patterns happens to be a key goal for AI in general, which is why Rosa hopes to eventually develop forms of artificial intelligence with broad utility beyond games. That’s reminiscent of the approach taken by DeepMind, an AI startup that Google bought last year (see “Google’s AI Masters Space Invaders”). DeepMind is using customized machine-learning approaches to teach software to play various simple games.

AI researchers have long used game play as a way to test artificial-intelligence software, says Roman Yampolskiy, an assistant professor at the University of Louisville. “From checkers to chess to poker and Go, some of the greatest accomplishments in AI research have been demonstrated around the game board,” he says. What’s interesting about the approach GoodAI and DeepMind are taking, he says, is that their computers are not given prior understanding of a game’s rules.

However, it’s still not clear whether the strategy will be useful beyond games. Yampolskiy, who has looked at GoodAI’s software, says that while it is a worthwhile contribution to the field, it may be very hard to use as the basis for a more general-purpose AI.


