Medical and Life Sciences
Researchers at The Australian National University are using deep learning and NVIDIA technologies to better understand the progression of Parkinson’s disease.
Currently it is difficult to determine what type of Parkinson’s someone has or how quickly the condition will progress.
The study will be conducted over the next five years at the Canberra Hospital in Australia and will involve 120 people suffering from the disease and an equal number of non-sufferers as a control group.
“There are different types of Parkinson’s that can look similar at the point of onset, but they progress very differently,” says Dr Deborah Apthorp of the ANU Research School of Psychology. “We are hoping the information we collect will differentiate between these different conditions.”
Dr Apthorp said the research will measure brain imaging, eye tracking, visual perception and postural sway.
From the data collected during the study, the researchers will be using a GeForce GTX 1070 GPU and cuDNN to train their deep learning models to help find patterns that indicate degradation of motor function correlating with Parkinson’s.
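The team's model architecture and features are not described here, so the following is only a minimal sketch of the underlying idea: learning a boundary that separates patient measurements from control measurements. The two-feature synthetic data and the logistic-regression stand-in (in place of their actual cuDNN-accelerated deep network) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative synthetic data: one row per participant, two hypothetical
# features (e.g. mean postural-sway amplitude, eye-tracking latency).
n = 200
controls = rng.normal(loc=[0.0, 0.0], scale=0.5, size=(n // 2, 2))
patients = rng.normal(loc=[1.5, 1.5], scale=0.5, size=(n // 2, 2))
X = np.vstack([controls, patients])
y = np.concatenate([np.zeros(n // 2), np.ones(n // 2)])

# Logistic regression trained by gradient descent, standing in for the
# deep network: learn weights that separate patients from controls.
w, b, lr = np.zeros(2), 0.0, 0.1
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # predicted P(patient)
    w -= lr * (X.T @ (p - y)) / n           # cross-entropy gradient step
    b -= lr * np.mean(p - y)

p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
accuracy = float(np.mean((p > 0.5) == y))
```

On well-separated synthetic clusters like these the classifier converges to near-perfect accuracy; the real task uses far noisier longitudinal data, which is where GPU-accelerated deep models earn their keep.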
The researchers plan to incorporate virtual reality into their work by having the sufferers wear head-mounted displays (HMDs), which will help them better understand how self-motion perception is altered in Parkinson’s disease, and by using stimuli that mimic the visual scene during self-motion.
“Additionally, we would like to explore the use of eye tracking built into HMDs, which is a much lower cost alternative to a full research eye tracking system and reduces the amount of equipment into a highly portable and versatile single piece of equipment,” says researcher Alex Smith.
More than one million athletes experience a concussion each year in the United States.
Researchers at Neural Analytics, a California-based startup, have designed a portable headset device that maps blood flow in the brain, which may make it easier to recognize concussions.
“There is growing evidence that concussions can change the blood flow in the brain,” said study author Robert Hamilton, PhD, co-founder of Neural Analytics and a member of the American Academy of Neurology. “While such changes may be detected with MRI, we believe there may be a less expensive and portable way to measure these changes with a transcranial Doppler (TCD) device.”
Using NVIDIA GPUs and deep learning, the device is able to distinguish the brains of young high school athletes who had recently suffered a traumatic brain injury from those of healthy subjects with 83 percent accuracy.
“This research suggests that this advanced form of ultrasound may provide a more accurate diagnosis of concussion,” said Hamilton. “While more research is needed, the hope is such a tool could one day be used on the sidelines to help determine more quickly whether an athlete needs further testing.”
New research suggests that video games have a positive effect on children’s development.
Thirteen researchers from Columbia University and Paris Descartes University found that elementary-age children who played video games for five or more hours per week – about 20 percent of the children surveyed – had fewer psychological problems and higher overall academic performance than their peers who did not play video games. In fact, the game players were described by teachers as better students, both academically and in social adjustment.
Part of the School Children Mental Health Europe project, the report analyzed the video game usage, academic performance, and behavior of nearly 3,200 European children between the ages of six and 11. Observations and data collected by parents and teachers were also considered to help guide researchers.

“I think what we’re seeing here is the evolution of gaming in modern society. Video games are now a part of a normal childhood,” Katherine Keyes, one of the 13 authors of the study, told U.S. News. “What we’re seeing here is that kids who play a lot of video games are socially integrated, they’re prosocial, they have good school functioning and we don’t see an association with adverse mental health outcomes.”
As the research concludes, video games provide educational, social, and psychological benefits for children.
You can read the entire study online in the journal Social Psychiatry and Psychiatric Epidemiology.
What is Virtual Reality? Virtual Reality is a set of computer technologies which, when combined, provide an interface to a computer-generated world, and in particular, provide such a convincing interface that the user believes he is actually in a three-dimensional computer-generated world. This computer-generated world may be a model of a real-world object, such as a house; it might be an abstract world that does not exist in a real sense but is understood by humans, such as a chemical molecule or a representation of a set of data; or it might be a completely imaginary science-fiction world.
A key feature is that the user believes that he is actually in this different world. A second key feature of Virtual Reality is that if the human moves his head, arms or legs, the shift of visual cues must be those he would expect in a real world. In other words, besides immersion, there must be navigation and interaction.
1. Computer mediated sensing
Different kinds of VE technology support different modes of interaction.
- One kind of VE technology employs subjective immersion, in which the user interacts as if using an ordinary desktop computer system. The user views the system from the usual close but remote position and interacts through standard or special-purpose input or control devices such as keyboards, mouse controls, trackballs, joysticks, or force balls. Three dimensions are represented on 3D displays through the use of simulation software employing perspective, object rotation, object interposition, relative size, shading, etc.
- The other kind of VE technology uses spatial immersion. The user is required to get inside the virtual space by wearing special equipment, typically at least a helmet mounted display that bears sensors to determine precise helmet position within the VE system’s range, in order to interact with the simulated environment. The user is thus immersed in a quasi-3D virtual space in which objects of interest appear to exist and events occur above, below, and around in all directions toward which the user turns his or her head.
Here follows a description of the typical hardware needed to run a virtual reality system. It will be discussed later whether it is advisable to retain all of these components when implementing a VE on a PC. What is important here is to focus on a standard architecture, as usually described in the literature.
Virtual Reality is often used as a comprehensive term to describe the use of 3-D graphics displays to explore a computer-generated world. This interaction between human and machine can happen according to different styles that represent the actual possibilities and potential of the technology. The different styles of interaction depend upon the way the virtual environment is represented. We can identify at least six interaction styles, each referring to the way the simulated/virtual environment is represented: desktop, projected, immersive, Cave, telepresence, and augmented.
1) Desktop VR
This is the most popular type. It is based upon the concept that the user interacts with the computer screen without being fully immersed in and surrounded by the computer-generated environment. The feeling of subjective immersion can be improved through stereoscopic vision (e.g., CrystalEyes), and interaction can be provided via pointing devices (mouse, joystick) or typical VR peripherals such as the DataGlove. Desktop VR is used mainly in games, but professional applications are now widespread; examples come from industrial design, engineering, architecture and the visualisation of data streams. The main benefits of desktop VR are its limited cost and its less intrusive interaction technology: depending on the scenario of use, a less “invasive” device such as a CRT monitor may be more appropriate than a wired HMD. Desktop VR seems particularly successful for the inspection of individual objects, whereas immersive VR is best exploited for the exploration of spaces. Modern CAD/CAM systems have gradually shifted towards the quality of VR interaction by allowing the user to manipulate 3-D objects as if they were real.
2) Projected VR
This is a technological solution often seen in VR art shows and VR leisure applications. It is based upon overlaying the image of the real user on the computer-generated world, so that users can see their own image superimposed on the simulated environment. A movement-tracking device captures the user’s movements and inserts them into the virtual world, where they can cause actions and reactions.
3) Immersive VR
With this type of solution the user appears to be fully inserted in the computer-generated environment. The illusion is created by an HMD providing 3-D viewing, together with a head-tracking system that guarantees the exact correspondence and co-ordination of the user’s movements with the feedback of the environment.
4) Cave VR

The Cave is a small room in which a computer-generated world is projected on the walls, both front and side. This solution is particularly suitable for collective VR experiences because it allows different people to share the same experience at the same time. It is particularly appropriate for cockpit simulations, as it allows views from different sides of an imaginary vehicle.

5) Telepresence

Users can influence and operate in a world that is real but in a different location. They observe the current situation with remote cameras and act via robotic and electronic arms. Telepresence is used for remote surgical operations and for the exploration and manipulation of hazardous environments (e.g., space, underwater, radioactive sites).

6) Augmented Reality

This VR solution is an invasive strategy towards reality: the user’s view of the world is supplemented with virtual objects and items whose purpose is to enrich the information content of the real environment. In military applications, for instance, vision is enhanced with pictograms that signal the presence of entities out of sight.

Virtual Reality is the product of a trick. The VR system tricks users into believing that the Virtual Environment by which they feel surrounded is the actual, real environment. This is made possible by several different devices, each with its own technology, each producing a specific aspect of the VE relevant to a specific sense. We will discuss hardware relevant to the three senses that are to be immersed in the VE: sight, touch and hearing.
In the Information Technology arena, Virtual Reality has been identified as one of the most promising development areas. As with all innovative applications, this new technology is not free of problems and concerns regarding its implementation in operative working domains. Yet we are witnessing constant improvement in both the quality of VR systems and the receptiveness of potential customers. This is due mainly to three reasons: (1) the decreasing cost of VR systems and devices; (2) the constantly improving performance and reliability of the technology; (3) the valuable economic benefits derived from VR use in its various forms and purposes (training, simulation, design). We can therefore speak of a consolidated class of technology, “virtual reality”, to be appraised like any other novel high-tech industry. This technology has been confidently adopted in a number of markets, and has the potential to penetrate many more.
The VR market is at present immature, without any clear market leaders or clear segmentation of activities. In a recent paper prepared for the European Commission’s IT Policy Analysis Unit (DG III/A.5) on VR, PVN (Belgium) estimates a market of $570 million (MECU 483) by 1998. This figure includes both hardware and software. The bad news for Europe is that it is forecast to have only $115 million (MECU 97) of that market, a poor third behind the USA and Japan.
A study into telematics applications of virtual environments, carried out by Sema Group (F), Fraunhofer IAO (D) and MIT’s Research Laboratory for Electronics (USA) for the Commission’s DG XIII/C in 1994, predicted a market evaluation of “roughly MECU 400 – MECU 500 by 1998” with a growth rate “very high, approaching 70-80% per year”. What is perhaps less disputed is that the major market activity is in entertainment equipment.
Frost & Sullivan’s 1994 VR market report stated that about 250 companies existed in the USA, and only 25 in other countries, which claim to make even part of their revenue from VR. Of these, no single firm earned more than $10 million (MECU 8.4) from VR alone. A recent Financial Times report listed four types of commercial VR company: software companies, component manufacturers, system companies and ‘other industry participants’. As might be expected, the vast majority of such companies are US-based. Only two European companies, Superscape and Division, both of the UK, are listed under software companies, and only one European company, Virtuality, is listed under component manufacturers.
Although this listing was not ranked and was definitely not exhaustive, most activity does seem to be taking place in the USA. The wider availability of venture capital and the tendency of small firms to ‘spin off’ from others may account in part for this.
According to the recent (January 1996) Business Communications Company, Inc. report “RGB-175/The Virtual Reality Business”, by 1996 more than 300 companies will record sales of about $255 million worth of VR products and services, and behind these figures lie many multinational military and medical customers. By 2000, the VR industry will be posting annual sales of over $1 billion, an average annual growth rate (AAGR) of 33%.
In July 1996, Ovum, the UK market research company, published another survey on Virtual Reality (VR) markets: ‘Virtual Reality: Business Applications, Markets and Opportunities’. Ovum expects the ‘killer application’ of VR to be in 3D interfaces to the Internet, used for promoting products and services on the World Wide Web (WWW). It predicts that in the next five years, VR will be widely used as a GUI (graphical user interface) for standard business software, thus replacing icon-based GUIs for such applications as database, business systems and networked management software. According to the survey, a large proportion of companies polled indicated that they would use PC-based VR training applications for their employees.
Regarding the present uptake of VR in business, the report concludes that “companies are finding virtual reality an important source of competitive advantage” and that “although some companies are taking their time to evaluate VR, which is slowing down the speed of market lift-off, many are reporting significant benefits and are increasing their use of VR technology.” It explains this expected increase in uptake by saying that “In many cases, companies have made cost savings of over US$1 million. They have experienced faster time to market, fewer mistakes than when using CAD technologies, greater efficiency in working methods and improved quality in final products.”
The report predicts that the VR market will grow from US$134.9 million in 1995 to just over US$1 billion by the year 2001 and that the largest growth sector will be in the software sector with a 58 per cent annual growth in this period.
Another significant finding of the report is that the business market for VR in 1995 represented 65 per cent of the total, with entertainment applications accounting for only 35 per cent, even though VR is normally seen as being of major significance to the games market. It is not known whether, and how, the authors distinguish between entertainment and “the entertainment business”.
The Ovum survey foresees a radical shift in how companies will be using VR between now and the year 2001. Today the majority of VR applications are in design automation: virtual prototyping, interior design and ergonomics, and architectural and engineering design. Expensive, workstation-based systems currently dominate, accounting for 43 per cent of the market. By 2001, however, PC-based VR technology will account for 46 per cent of the business market, where most of the applications will be non-immersive, using computer screens instead of headsets.
Virtual Reality Market Forecasts by Application ($ millions, constant 1995)

| Application | 1994 | 1995 | 2000 | AAGR (%) |
|---|---|---|---|---|
| Instructional & Developmental | 70 | 95 | 355 | 31 |
| Design & Development VR | 25 | 30 | 150 | 40 |
| Medical Treatment VR | 10 | 20 | 50 | 20 |

Source: Business Communications Company, Inc., GB-175, The Virtual Reality Business, 1996
Applicative domains and major marketing areas
Marketing experts converge on the fact that the major market activity is entertainment equipment: leisure uses account for the largest share of VR market value, and are forecast to continue growing at a 35% AAGR to the year 2000 (see table). The critical mass, in marketing terms, will be reached with mass-produced single-user entertainment VR systems; this will be the propelling force pushing the market from a 1995 value of $110 million to $500 million by the year 2000.
Home and entertainment
The greatest market expansion is expected for site-based entertainment. This expectation rests on two factors: low saturation and a dramatic decrease in prices. These factors will allow VR technology to be used by all facets of society, including commercial and industrial users, government, the military, universities and secondary schools, on a scale not comparable with any previous situation. VR will also play a significant role in support of education in general: the instructional and developmental market is expected to widen its share from a 1995 figure of $95 million to $355 million by 2000, an AAGR of 31%. This increase will affect technical and engineering colleges and universities; “developmental” VR includes spending on advanced but as yet non-commercial applications, along with pure science and research systems not included in the other categories.
Industrial and Scientific Design
The design-and-development VR market covers engineering, architecture and chemical design and development; a steady shift will bring the performance of CAD/CAM applications up to the standards of Virtual Reality. This market will grow from a 1995 value of $30 million to $150 million by 2000, an AAGR of 40%. The medical-treatment VR market will also sustain growth: its 1995 value of $20 million is projected to reach $50 million by 2000, a 20% AAGR.
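The AAGR figures quoted in this section follow from simple compounding between the 1995 and 2000 values, and can be checked directly:

```python
def aagr(start, end, years):
    """Average annual growth rate implied by compounding start -> end."""
    return (end / start) ** (1.0 / years) - 1.0

# Five years of growth, 1995 -> 2000:
medical = aagr(20, 50, 5)    # ~0.201, matching the quoted 20% AAGR
design = aagr(30, 150, 5)    # ~0.380, close to the quoted (rounded) 40% AAGR
```

The small gaps between computed and quoted rates suggest the report rounded its published AAGRs.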
The search for common standards
Current VR products employ proprietary hardware and software. There is little doubt that incompatibility between different systems is restricting market growth at present. It is probable that as the market matures, certain de facto standards will emerge, perhaps when major players become involved. It is probable that the VR market will follow the route of the real-time financial information markets which found that adopting an open systems approach did not damage sales, as had been feared, but helped encourage the growth of the marketplace. According to the IMO – Information Group at Policy Studies Institute, London (August 95 – VIRTUAL REALITY: THE TECHNOLOGY AND ITS APPLICATIONS), “in the future an open systems approach will emerge for VR as well”. At that point, the market is likely to expand considerably.
However, the cost of VR equipment is falling rapidly. For example, headgear prices have already fallen from hundreds of thousands of dollars to $200 (ECU 169), and basic VR software packages are available commercially for $100 (ECU 85), or can be downloaded from the Internet. Simple VR games software is available in the USA for $70 (ECU 59).
The seminal efforts that gave rise to VR took place in the US. Funding from EC organisations has been slower in coming than in the US, where the Office of Naval Research, National Science Foundation, and Advanced Research Projects Agency now fund VR research and the National Aeronautics and Space Administration has been a long-time developer. This situation is perhaps attributable to the large cost associated with VR until quite recently. However the importance of VR is clearly understood in Europe and progress is now going forward across the entire spectrum of virtual reality, with special emphasis on industrial and commercial applications.
Europe encompasses various countries and cultures, and acceptance of the importance of VR has not been uniform. Interest from British Aerospace, the presence of the parallel-processing company Inmos (makers of the Transputer), and early funding by the Department of Trade and Industry are cited by UK researchers as factors that drove research in the UK in the mid-to-late 1980s. This resulted in technology transfer that has produced several successful commercial efforts. More recently, German laboratories and institutions have become active in applying immersion technology to a broad range of applications. France has several of Europe’s leading research institutions for machine vision, robotics, and related technologies that affect VR, but has been less active in developing systems that provide interactive immersion. Most other West European countries have some VR R&D.
In the last two years, the EC organised several events to evaluate VR as a topic for the next research initiative. Recently the EC presented a study titled “Telematics applications of VE – The use of Virtual Environment Techniques in the Application of Telematics to Health Care, Transport, Training and the Disabled and Elderly”. This study was the third activity in a series, starting with a workshop in Brussels in March 1993 which attempted a status report and began gathering recommendations on how to incorporate VR in future EC programmes. The second activity was a small report creating the basis for a larger study, which was finally carried out by a team from the Fraunhofer Institute, SEMA Group and MIT.
The “Telematics Application” study has a small section on VE technologies and VE applications (generic uses of VE technological capabilities, evaluation of the market), then treats each of the mentioned fields (Education/Training, Transport, Health Care, and Elderly and Handicapped) and finishes off with potential actions for the TAP programme. In the health area the report states: “In effect, the objective is to validate the 3-D approaches of VE, and evaluate their benefits for future health care systems. In parallel, other projects aimed at providing basic building blocks for future uses in VE-based medical applications are also of interest. They concern digital and computational models of the human body or critical organs”. The report stresses the use of VE in minimally invasive surgery, surgical decision support and the training of surgeons, doctors and students. It also finds a use in the evaluation of human interfaces and other factors in the design of critical components of new health care facilities.
EC funded projects/working groups relevant to VREPAR
The European Strategic Program for Research and Development (Esprit II) funded a handful of ongoing VR projects. Glad-in-Art is developing a glove-exoskeleton interface system to manipulate virtual objects, while SCATIS intends to integrate room acoustics into virtual worlds, and Humanoid concentrates on the development and simulation of virtual humans.
The call for proposals for Esprit III did not include a specific VR component. However, VR was explicitly mentioned within the basic research and multimedia components (two of the seven programme areas). Among the funded studies is FIVE (Framework for Immersive Virtual Environments).
Other VR projects deal with Virtual Environments and multi-modal interfaces (MIAMI and VETIR). VETIR deals with the use of virtual environment technologies in motor rehabilitation within the Technology Initiative for Disabled and Elderly People (TIDE).
Three important aspects of virtual reality systems offer new possibilities to medical treatment:
- How They Are Controlled
Present alternate computer access systems accept only one, or at most two, modes of input at a time. The computer can be controlled by single modes such as pressing keys on a keyboard, pointing to an on-screen keyboard with a head pointer, or hitting a switch when the computer presents the desired choice; but present computers do not recognize facial expressions or idiosyncratic gestures, nor do they monitor actions from several body parts at a time. Most computer interfaces accept only precise, discrete input. Thus many communicative acts are ignored, and the subtlety and richness of human communicative gesture are lost. This results in slow, energy-intensive computer interfaces. Virtual reality systems open up the input channel: the potential is there to monitor movements or actions from any body part, or many body parts at the same time. All properties of the movement can be captured, not just contact of a body part with an effector.
Given that these actions are monitored, why can the user control more in the virtual world than in the real world? In the virtual environment these actions or signals can be processed in a number of ways. They can be translated into other actions that have more effect on the world being controlled: for example, virtual objects could be pushed by blowing, pulled by sipping, and grasped by jaw closure. Proportional properties such as force, direction, and speed could become interchangeable, allowing a person with arthritic joints to push something harder, without the associated pain, simply by moving faster. Signals could be filtered to achieve a cleaner input. Actions can be amplified, so that movement of the index finger could guide a tennis racket. Alternatively, movements could be attenuated, giving an individual with large, poorly controlled movements more precise control of finer actions.
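The filtering, amplification and attenuation described above are, at bottom, simple signal transforms. A minimal sketch follows; the moving-average filter, the gain values and the function name are illustrative assumptions, not any particular VR system's API:

```python
import numpy as np

def remap_movement(signal, gain=1.0, window=5):
    """Smooth a noisy 1-D movement trace with a moving average, then rescale.

    gain > 1 amplifies a small residual movement (an index finger guiding a
    virtual tennis racket); gain < 1 attenuates a large, poorly controlled one.
    """
    kernel = np.ones(window) / window
    smoothed = np.convolve(signal, kernel, mode="same")  # simple low-pass filter
    return gain * smoothed

# A tremor-like trace: slow intended motion plus high-frequency noise.
t = np.linspace(0.0, 1.0, 100)
raw = np.sin(2 * np.pi * t) + 0.2 * np.random.default_rng(1).normal(size=100)
amplified = remap_movement(raw, gain=5.0)   # gross control from a small movement
steadied = remap_movement(raw, gain=0.5)    # finer control from a large movement
```

Real systems would tune the filter to the user's tremor frequency and map the cleaned signal onto whatever virtual effector is being controlled, but the principle, clean then rescale, is the same.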
- How Feedback Is Displayed

Because VR systems display feedback in multiple modes, feedback and prompts can be translated into alternate senses for users with sensory impairments. The environment could be reduced in size to give a larger, overall perspective (without the “looking through a straw” effect usually experienced when using screen readers or tactile displays). Objects and people could show speech bubbles for a person who is deaf. Sounds could be translated into vibrations, or into a register that is easier to pick up. Environmental noises can be selectively filtered out. A user with a spinal cord injury and no sensation in her hands could receive force and density feedback at the shoulder, neck, or head.
For the individual, multimodal feedback ensures that the visual channel is not overloaded. Vision is the primary feedback channel of present-day computers; frequently the message is further distorted by representation through text. It is very difficult to represent force, resistance, density, temperature, pitch, etc., through vision alone. Virtual reality presents information in alternate ways and in more than one way. Sensory redundancy promotes learning and the integration of concepts.
- What Is Controlled
The final advantage is what is controlled. Until the last decade computers were used to control numbers and text by entering numbers and text using a keyboard. Recent direct manipulation interfaces have allowed the manipulation of iconic representations of text files or two dimensional graphic representations of objects through pointing devices such as mice (Brownlow, 1989). The objective of direct manipulation environments was to provide an interface that more directly mimics the manipulation of objects in the real world. The latest step in that trend, virtual reality systems, allows the manipulation of multisensory representations of entire environments by natural actions and gestures. This last step may make accessible valuable experiences missed due to physical or sensory impairments. These experiences may include early object-centered play, and early independent mobility.
In virtual environments we can simulate inaccessible or risky experiences, allowing the user to extract the lessons to be learned without the inherent risk. Virtual reality systems can allow users to extend their world knowledge.
According to an assessment of the current diffusion of VR in the medical sector by the Gartner Group, forecasts for VR’s future in this area are quite promising. Within medical applications its strategic relevance will increase and gain importance. It is envisaged that by the year 2000, despite possible technological barriers, virtual reality techniques will be integrated into endoscopic surgical procedures. VR will also affect medical education, for students as well as experienced practitioners, who will increasingly be trained with immersive simulation techniques. These educational practices are expected to become routine by 2005.
VR has until now been widely underused, probably because of prohibitive hardware costs; nevertheless, the technology is pushing forward new challenges and advances that will materialise by the year 2000. The medical use of VR will take place mainly in four domains:
- teaching: VR will reproduce environments or special conditions that will enable the education of medical personnel.
- simulation: VR will mix video and scanner images to represent and plan surgical interventions and the effects of therapy.
- diagnostics: it will be possible to forecast the effects of complex combinations of healing treatments.
- therapy: a valuable exploitation of VR in the medical sector is the therapy of psychiatric/psychological disorders such as acrophobia, claustrophobia, nyctophobia, agoraphobia, eating disorders, etc. Therapeutic techniques will include practices that allow patients to reproduce and master problem environments.
For a more detailed description of the use of VR in health care you can read the paper: VR in Health Care: A Survey
5. Issues to be solved
Although the technology is mature enough to support a range of applications, key issues remain to be resolved before it can be put to practical use.
- costs: The products seem to be “a solution in search of a problem”. As with early computer-graphics products, the entry-level costs are relatively prohibitive. A complete VR environment, including workstations, goggles, body suits, and software, is in the range of ECU 70,000 to ECU 1,000,000.
- lack of standards and reference parameters: The hyperbole and sensational press coverage associated with some of these technologies have led many potential users to overestimate the actual capabilities of existing systems. Many of them must in fact develop the technology significantly for their specific tasks. Unless their expertise includes knowledge of the human-machine interface requirements for their application, the resulting product will rarely get beyond a “conceptual demo” that lacks practical utility.
- human factors: The premise of VE seems to be to enhance the interaction between people and their systems. It thus becomes very important to understand how people perceive and interpret events in their environments, both in and out of virtual representation of reality. We must address issues of human performance to understand how to develop and implement VE technology that people can use comfortably and effectively. Fundamental questions remain about how people interact with the systems, how they may be used to enhance and augment cognitive performance in such environments, and how they can best be employed for instruction, training, and other people oriented applications.
The market situation of VR is very fluid: the technology, while ready for professional applications, has not yet settled into definite standards and reference points in any respect, including likely leading manufacturers, compatibility specifications, performance levels, costs and human expertise. As things stand, the situation is heavily characterised by uncertainty.
This uncertainty should not be confused with a lack of confidence in the technology's promising outcomes; rather, it reflects the rapid mutation and evolution that characterise all information technology markets. As far as the project is concerned, these reflections are a warning against adopting solutions that can only be a short-term answer to a contingent problem. Particular care must be taken to avoid a continuous chase after the latest technological product release.
Within the general aim of the project, we take advantage of the widespread diffusion of PC-based technology, and of the best associated hardware and software devices available, which can ensure both reliability and availability across different domains independently of the constraints posed by geographical location.
Phobics typically panic or become anxious when they encounter the object or situation that makes them afraid, even though they know the object or situation (e.g., a small house spider) is not that dangerous. Such unrealistic or excessive fear of objects or situations is a psychological disorder that can make life miserable for years. Exposure therapy has proved effective for many different types of phobias, including spider phobia. Exposure therapy is a clinical treatment based on gradually and systematically exposing the phobic person to the feared object or situation a little at a time, starting very slowly, and calming them. Little by little their fear decreases and they become more comfortable with spiders. They will probably always be a little creeped out by spiders, but therapy can train them not to panic. After treatment, most “former phobics” start living life more fully. Success overcoming their fear can lead to increased self-confidence, which in turn often has other positive benefits.
In vivo exposure therapy is a combination of cognitive psychology and behavioral therapy (cognitive-behavioral therapy, which is not Freudian). People are taught to think a little differently about spiders (this is the cognitive part of the therapy, where cognition means “thinking”). In addition, during treatment, phobics are deconditioned using stimulus-response learning (and unlearning). This is the behavioral part of the treatment.
Pavlov, an early behaviorist, paired a stimulus (a bell) with the presence of food. Every time a dog heard the bell, it got some food. After enough repetitions, the dogs started to associate the bell with the food and would salivate when they heard it, even if no food was present. It is believed that spider phobia is due, at least in part, to a similar stimulus-response association. The spider, the stimulus, evokes a response: fear and anxiety. Every time the phobic runs in fear from a spider, it strengthens, or at least helps maintain, this association. Avoidance feeds phobias.
Did you know that stimulus-response conditioning can be reversed? A dog that has been conditioned to salivate when it hears a bell can be untrained: if you ring the bell without presenting food enough times, pretty soon the stimulus-response association between bell and food disappears, and the dog no longer salivates (or salivates much less) when it hears the bell. The behavioral part of VR exposure therapy uses a similar approach to treat spider phobics.

With in vivo (“in life”) exposure therapy, under a therapist’s supervision and guidance, the phobic slowly approaches the thing they are afraid of in the real world rather than avoiding it. Phobics initially display a rapid increase in anxiety, but if they hold their ground instead of fleeing, their fear and anxiety actually habituate. They stop sweating, their heart rate slows down and they feel less anxiety, even though they are standing fairly near a spider; it is as if their nervous systems start to get bored with the spider. During this phase of in vivo exposure therapy, when their anxiety is going down in the presence of the live spider (e.g., a tarantula in a terrarium), they are reversing the stimulus-response association. The old association is, in effect, “cancelled out” by a new association between the presence of the spider and a DROP in anxiety; that is, they start to associate a spider with becoming LESS anxious. In addition, the therapist explains a number of things to the patient and helps the patient think differently about spiders, and about their own anxiety (this is the cognitive portion of the treatment).

It will be a few years before VR exposure therapy is more widely available. While it is possible to get treated with VR right now, you may not be able to go all the way to Los Angeles to get it.
On the West Coast, call Brenda Wiederhold, MBA, Ph.D., at the California School of Professional Psychology in San Diego, CA for more information about getting VR exposure therapy: (858) 623-2777, Ext. 415, Bwiederhold@cspp.edu, http://www.vrphobia.com
On the East Coast (Manhattan), contact JoAnn Difede, Ph.D., Director of The Program for Anxiety and Traumatic Stress Studies at Weill Cornell Medical College’s Department of Psychiatry. http://www.patss.com/
In the Seattle area, Brian Neville, Ph.D., LLC now uses a number of techniques, including VR exposure therapy, to treat a number of phobias (private clinical practice in Woodinville, WA). http://home.covad.net/~neville1959/index.htm, 18500 156th Ave NE, Suite 202, Woodinville, WA 98072
425-481-5700, ext. 10#
Or, to find treatment providers worldwide, go to http://www.virtuallybetter.com/
So, if you can’t find a place to get VR therapy, keep in mind that in vivo exposure therapy works really well too, and the psychology department (preferably the clinical psychology or psychiatry department) of a university near your city can refer you to a good cognitive-behavioral therapist.
Claustrophobia, fear of heights, fear of spiders, fear of cats, fear of dogs, fear of driving, fear of flying and fear of public speaking are common examples of specific phobias (there are numerous others). Cognitive-behavioral therapy for specific phobias with in vivo exposure has a VERY high success rate and typically takes 12 hours or less, a very small amount of time considering how long some other problems take. Yet people with phobias rarely seek treatment for their problem! Most just limp through life avoiding the thing they are afraid of, in constant fear of being discovered. In other words, many phobics are afraid of the embarrassing panic attack they will have, i.e., of how bizarrely they will act if they happen to run into a spider. Some are more worried about this over-reaction and its social consequences than about the spider itself.
In some situations, fears can be dangerous: for example, when a person nearly wrecks their car because a spider drops into their lap from a sun visor, or when a patient who needs a brain scan can’t go into the confined scanner because of claustrophobia. Over 80% of people who seek cognitive-behavioral therapy (e.g., in vivo exposure therapy) for their phobias no longer panic when they encounter a spider, and this typically holds true indefinitely (i.e., people tend to remain cured once successfully treated).
Despite the fact that this type of treatment is so fast and effective, only a small proportion of spider phobics ever actually seek treatment for their problem. The reason is fairly obvious. Understandably, THEY DON’T WANT TO GO ANYWHERE NEAR A LIVE SPIDER, EVEN FOR THERAPY.
Virtual reality to the rescue
In collaboration with others, Barbara Rothbaum (a clinical psychologist from Emory) and Larry Hodges (a computer science expert from Georgia Tech) were the principal investigators in the first published journal study on using VR exposure therapy to treat a phobia (fear of heights; see http://www.cc.gatech.edu/gvu/virtual/). This was followed by a publication on using VR to treat spider phobia by our research group at the HITLab (Carlin and Hoffman at the University of Washington in Seattle, who have recently been working with Azucena Garcia and Christina Botella from Spain; more about this later).
Since then, down in Atlanta, Georgia, Rothbaum and Hodges have had great success using VR exposure therapy to treat fear of flying, and they have had some encouraging preliminary results treating post-traumatic stress disorder in Vietnam vets, a disorder notoriously difficult to help (unlike phobias, which are easy to treat quickly and successfully). Hodges and Rothbaum are currently exploring the use of VR for treating fear of public speaking as well. Several of the virtual worlds they developed are now commercially available for clinicians interested in using VR exposure therapy with their patients (see http://www.virtuallybetter.com/).
Brenda Wiederhold has spearheaded the creation of a treatment center in Southern California. She and her colleagues have treated over 100 phobics with Hodges’s virtual reality exposure therapy software. Her group presently treats fear of heights, fear of flying and a number of other problems.
Albert Carlin and Hunter Hoffman published the second journal paper on VR exposure therapy. At the suggestion of one of Dr. Carlin’s patients, Al and Hunter extended Rothbaum and Hodges’s idea of using immersive virtual reality for exposure therapy to a new type of fear: spider phobia. It was actually the idea of Miss Muffet, the first patient they treated together. Prior to treatment, Miss Muffet had been clinically phobic for nearly 20 years and had acquired a number of spider-related obsessive-compulsive behaviors. She routinely fumigated her car with pesticides and smoke to get rid of spiders. She sealed her bedroom windows with duct tape each night after scanning the room for spiders. She was hypervigilant, searching for spiders wherever she went and avoiding walkways where she might find one. After washing her clothes, she immediately put them inside a sealed plastic bag to make sure they remained free of spiders. Over the years, her condition became worse. When her fear made her hesitant to leave home (a very extreme phobia), she finally sought therapy.
Researcher Hunter Hoffman, U.W., holding a virtual spider near the face of a patient as part of virtual reality exposure therapy to reduce fear of spiders. In the immersive virtual world called SpiderWorld, patients can reach out and touch a furry toy spider, adding tactile cues to the virtual image and creating the illusion that they are physically touching the virtual spider. Tactile augmentation was shown to double treatment effectiveness compared to ordinary VR. Photo: Mary Levin, U.W., with permission from Hunter Hoffman, U.W. (Picture on right) An image of what patients see (in 3-D) in SpiderWorld as they grab a wiggly-legged virtual tarantula.
During the 12 one-hour VR therapy sessions at the U.W. Human Interface Technology Laboratory (HITLab) in Seattle, Miss Muffet started very slowly. First she stood completely across the virtual world from the virtual spider. Slowly she got a little closer, her progress closely monitored by Al and Hunter, who watched what she was seeing in VR on a computer monitor. In later sessions, after she had lost some of her fear of spiders, she was sometimes encouraged to pick up the virtual spider and/or web with her cyberhand and place it in the orientations that were most anxiety-provoking. Other times, the experimenter controlled the spider’s movements (unexpected jumps, etc.). Some virtual spiders were placed in a cupboard with a spiderweb. Others climbed, or dropped from their thread, from the ceiling to the virtual kitchen floor. Eventually, after getting used to them, Miss Muffet could tolerate holding and picking up the virtual spiders without panicking. She could pull a spider’s legs off (initially this occurred accidentally, and then deliberately at the experimenter’s request). The worlds employed a large brown virtual spider with photograph-quality texture-mapped fur (made by Scott Rousseau and Ari Hollander, see www.imprintit.com, and later re-made with animations by Duff Hendrickson), and a smaller black spider with an associated 3-D web (by far the best spider (just kidding) was the virtual black widow Hunter made, which reminded Miss Muffet of the spiders she saw in her nightmares, described next). The black one was flawed in that it was possible to pull its virtual legs off if one grabbed it right. This turned out to be good.
After only two one-hour virtual reality exposure therapy sessions, Miss Muffet was noticing some very important progress. For example, prior to VR treatment she had a recurring nightmare about spiders (very scary). After her second VR exposure session, she had the nightmare again that night, but it was no longer scary. In fact, in her dream she was able to talk to the spiders for the first time, and scolded them for scaring her. “Don’t feel bad lady, we scare everyone,” said their cigar-smoking thug leader in her dream. “Well STOP IT,” she told them. The magic spell the spiders had on her was broken by her recent VR exposure therapy. Really, the truth is, the magic spell that SHE had on HERSELF was broken: VR allowed her to reverse the spell she had somehow cast on herself earlier in life, without intending to. When she came in for her third one-hour VR treatment session, there was a sparkle in her eye. She could tell she was making progress, and that gave her confidence and bravery and made her hungry to finish the job of curing herself. After several more one-hour VR sessions over several weeks (one treatment per week, for three months total), she reported to us that she had had the nightmare yet again, but this time the spiders in her dream were gone; only cobwebs remained. This routine with the dreams may only happen with this one patient (it’s hard to predict), but it was very interesting to us. As a psychologist interested in how the human mind works, this experience treating spider phobics with VR has been fascinating for me (Hunter).
Toward the end of Miss Muffet’s therapy (after about nine one-hour sessions), Al Carlin and Hunter started running out of new tricks for evoking anxiety from Miss Muffet. She would reach out with her cyberhand in the virtual world to touch the virtual spider, but contrary to her earlier panic reactions she now felt only a little anxiety, since she had gotten used to grabbing the virtual spider.
In order for therapeutic progress to continue, Hunter and Al had to come up with new spider behaviors or new spider-related experiences that would initially evoke an anxiety response, so they could continue to habituate Miss Muffet. They tapped a technique called mixed reality that Hunter had been studying in some other VR research. One weird thing about virtual objects is that they are typically only visual illusions: when you reach out to touch a virtual spider, your cyberhand goes right through the spider. If you reach out to touch a virtual wall, your virtual hand typically passes right through the wall like something from a sci-fi movie. This quality of non-solidity is interesting and fun, but it detracts from VR’s realism. To give the virtual spider solidity and weight (“cyberheft”), Hunter rigged up a furry toy spider with a bad toupee, such that when Miss Muffet reached out to touch the virtual spider in the virtual world, her real hand simultaneously touched the furry toy spider in the real world! Although we told her it was coming, Miss Muffet was quite surprised when she had the illusion of physically touching the virtual spider. Suddenly, the virtual spider she had grown accustomed to touching without anxiety (i.e., during therapy) evoked a huge anxiety response. But, as predicted, Miss Muffet got used to even this “mixed reality” spider. It is called mixed reality because it was part virtual (the visual animated spider in VR) and part real (the tactile cues from the real toy spider). For more information on Hunter’s research on tactile augmentation and mixed reality, see the papers at www.hitl.washington.edu/people/hunter/.
According to Miss Muffet, this extraordinary experience/illusion of physically groping the plump furry body of a Guyana bird-eating tarantula was a big turning point. She said that after she had gotten over the anxiety it evoked, she was largely cured. After holding that virtual beast, an ordinary real spider in her real kitchen was not scary at all. A subsequent controlled experiment with 36 participants showed that Miss Muffet was right: exposure therapy culminating in the handling of a mixed reality spider increased therapeutic effectiveness compared to the same therapy without any mixed reality (i.e., with only virtual spiders that couldn’t be physically touched). See Hoffman, Garcia-Palacios, Carlin, Furness III and Botella (2003).
Garcia-Palacios, A., Hoffman, H.G., Kwong See, S., Tsai, A., & Botella-Arbona, C. (2001). Redefining therapeutic success with virtual reality exposure therapy. CyberPsychology and Behavior, 4, 341-348.
Hoffman, H.G., Garcia-Palacios, A., Carlin, A., Furness, T.A. III, & Botella-Arbona, C. (2003). Interfaces that heal: Coupling real and virtual objects to cure spider phobia. International Journal of Human-Computer Interaction, 16, 283-300.
During the course of therapy the patient could also squash the virtual spiders with a mixed-reality ping pong paddle. These interactions in VR caused her great anxiety, including trembling, sweating, and dryness of mouth, and feeling on the verge of tears.
Prior to VR treatment, the patient filled out a fear-of-spiders questionnaire. A sample of 280 undergraduate psychology students filled out the same questionnaire as a comparison group. The undergrads received no treatment and gave their ratings only once. Initially, only one undergraduate had a higher fear-of-spiders score than the patient. After 12 weekly one-hour desensitization treatments for the patient, 29% (80 students) had higher fear of spiders scores than the patient.
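The comparison described above amounts to computing the patient's standing within the student sample. As a hedged illustration (the scores below are invented; the real questionnaire data are not reproduced here), the calculation can be sketched as:

```python
def percent_above(patient_score, sample_scores):
    """Percentage of the comparison sample scoring higher (more fearful) than the patient."""
    higher = sum(1 for s in sample_scores if s > patient_score)
    return 100.0 * higher / len(sample_scores)

# Invented example: after treatment, 80 of the 280 students score above the patient.
sample = [60] * 80 + [40] * 200   # 80 higher scores, 200 lower scores
print(round(percent_above(50, sample), 1))  # → 28.6, matching the ~29% reported
```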
The results are very encouraging. Importantly, this dramatic reduction in the patient’s fear of spiders is also reflected in her behavior in the real world. She stopped engaging in obsessive-compulsive spider rituals, and can now interact with real spiders with moderate but manageable emotion. Her improvement is so profound that she has taken up new hobbies such as camping outdoors, something she would never have dreamed of doing prior to therapy. In fact, to her amazement, the story came full circle: Miss Muffet became the star of a Scientific American Frontiers program on SPIDERS! on PBS that featured the SpiderWorld developed by Hoffman and Carlin. She is shown at the top of this webpage holding a real tarantula (don’t do this at home). You can watch this free educational science documentary video clip about our use of virtual reality exposure therapy to treat Miss Muffet at PBS by clicking HERE (once at PBS, be sure to scroll down to the digital video story called “arachnophobia”).
She is the first spider phobia patient to be cured using immersive VR therapy. This case study (Carlin, Hoffman and Weghorst, 1997) adds converging evidence to the growing literature showing the effectiveness of VR for medical applications. We have since treated about 20 clinical phobics at the HITLab with a success rate of approximately 85%, and we continue to conduct research on this interesting topic. See Garcia-Palacios, A., Hoffman, H.G., Carlin, A., Furness, T.A. III, & Botella-Arbona, C. (2002). Virtual reality in the treatment of spider phobia: A controlled study. Behaviour Research and Therapy, 40, 983-993.
Rothbaum and Hodges were first, Carlin and Hoffman were second to publish, and Botella and colleagues from Spain were the third group to publish a case study on using immersive VR exposure therapy to treat phobia. Interestingly, all three groups published in the journal Behaviour Research and Therapy. Botella et al. created a VR treatment for claustrophobia, the fear of enclosed spaces. Part of this treatment involves going into a fairly large virtual room. The patient controls the walls of this room, which close in on the patient in VR; as they close in, they make a noise like concrete scratching on concrete. Claustrophobia is a big problem for some people who need a brain scan but can’t bear to go into the scanner. Botella and colleagues are also having success using VR to treat severe anorexia. Botella’s active group in Spain (which includes Azucena Garcia-Palacios and several other talented clinical psychologists) is quickly becoming one of the top centers in the world for research on VR treatments for psychological disorders.
Healthcare is one of the biggest adopters of virtual reality, with uses encompassing surgery simulation, phobia treatment, robotic surgery and skills training.
One of the advantages of this technology is that it allows healthcare professionals to learn new skills, as well as refresh existing ones, in a safe environment and without putting patients in any danger.
Human simulation software
One example of this is the HumanSim system, which enables doctors, nurses and other medical personnel to engage in training scenarios in which they interact with a patient, but within a 3D environment only. It is an immersive experience that measures the participant’s emotions via a series of sensors.
Virtual reality diagnostics
Virtual reality is often used as a diagnostic tool in that it enables doctors to arrive at a diagnosis in conjunction with other methods such as MRI scans. This removes the need for invasive procedures or surgery.
Virtual robotic surgery
A popular use of this technology is in robotic surgery, where surgery is performed by means of a robotic device controlled by a human surgeon, reducing time and the risk of complications. Virtual reality has also been used for training purposes and in the field of remote telesurgery, in which the surgeon operates from a location separate from the patient.
The main feature of this system is force feedback as the surgeon needs to be able to gauge the amount of pressure to use when performing a delicate procedure.
But there is an issue of time delay, or latency, which is a serious concern: any delay, even a fraction of a second, can feel abnormal to the surgeon and interrupt the procedure. So precise, low-latency force feedback needs to be in place to prevent this.
Robotic surgery and other issues relating to virtual reality and medicine can be found in the virtual reality and healthcare section. This section contains a list of individual articles which discuss virtual reality in surgery etc.
More Examples of Virtual Reality and Healthcare
This section looks at the various uses of VR in healthcare and is arranged as a series of the following articles:
- Advantages of virtual reality in medicine
- Virtual reality in dentistry
- Virtual reality in medicine
- Virtual reality in nursing
- Virtual reality in surgery
- Surgery simulation
- Virtual reality therapies
- Virtual reality in phobia treatment
- Virtual reality treatment for PTSD
- Virtual reality treatment for autism
- Virtual reality health issues
- Virtual reality for the disabled
Some of these articles contain further sub-articles. For example, the virtual reality in phobia treatment article links to a set of articles about individual phobias, e.g. arachnophobia, and how they are treated with this technology.
Most of us think of virtual reality in connection with surgery but this technology is used in non-surgical ways, for example as a diagnostic tool. It is used alongside other medical tests such as X-rays, scans and blood tests to help determine the cause of a particular medical condition. This often removes the need for further investigation, such as surgery, which is both time consuming and risky.
Augmented reality is another technology used in healthcare. If we return to the surgery example: with this technology, computer-generated images are projected onto the part of the body to be treated, or are combined with scanned real-time images.
What is augmented reality? It is where computer-generated images are superimposed onto a real-world object with the aim of enhancing its qualities. Augmented reality is discussed in more detail in a separate section.
Editor’s note: This is one in a series of profiles of five finalists for NVIDIA’s 2016 Global Impact Award, which provides $150,000 to researchers using NVIDIA technology for groundbreaking work that addresses social, humanitarian and environmental problems.
Performing ocular microsurgery is about as hard as it sounds — and, until recently, eye surgeons had practically been flying blind in the operating room.
Doctors use surgical microscopes suspended over a patient’s eyes to correct conditions in the cornea and retina that lead to blindness. These have limited depth perception, however, which forces surgeons to rely on indirect lighting cues to discern the position of their tools relative to sensitive eye tissue.
But Joseph Izatt, an engineering professor at Duke University, and his team of graduate students are changing that. They’re using NVIDIA technology to give surgeons a 3D, stereoscopic live feed while they operate.
“This is some of the most challenging surgery there is because the tissues that they’re operating on are very delicate, and particularly valuable to their owners,” said Izatt.
Duke is one of five finalists for NVIDIA’s 2016 Global Impact Award.
Two Steps Beyond Standard Practice
Standard practice for ocular microsurgery is to send the patient for a pre-operation scan. This generates images that the surgeon uses to map out the disease and plan surgery. Post-operation, the patient’s eye is scanned again to make sure the operation was a success.
State-of-the-art microscopes go one step further. They use optical coherence tomography (OCT), an advanced imaging technique that produces 3D images in five to six seconds. Izatt’s work goes another step beyond that by taking complete 3D volumetric images, updated every tenth of a second and rendered from two different angles, resulting in a real-time stereoscopic display into both microscope eyepieces.
“I’ve always been very interested in seeing how technology can be applied to improving people’s lives,” said Izatt, who has been working on OCT for over 20 years.
His team is using our GeForce GTX TITAN Black GPU, CUDA programming libraries and 3D Vision technology to power their solution. Rather than having to do pre- and post-operation images to gauge their success, surgeons can have immediate feedback as they operate.
3D Images at Micrometer Resolution
A single TITAN GPU takes the stream of raw OCT data, processes it, and renders 3D volumetric images. These images, at a resolution of a few micrometers, are projected into the microscope eyepieces. CUDA’s cuFFT library and special function units provide the computational performance needed to process, de-noise, and render images in real time. With NVIDIA 3D Vision-ready monitors and 3D glasses, the live stereoscopic data can be viewed by both the surgeon using the microscope and a group observing the operation as it occurs—a useful training and demonstration tool.
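The core FFT step can be sketched as follows. This is a minimal, assumed reconstruction of a generic spectral-domain OCT A-scan pipeline in NumPy, not the Duke team's CUDA/cuFFT implementation; the windowing choice and array sizes are illustrative.

```python
import numpy as np

def oct_ascan(spectrum):
    """Turn one detected spectral interferogram into a depth-intensity profile.

    In spectral-domain OCT, a reflector at a given depth shows up as a fringe
    frequency in the spectrum; a Fourier transform maps fringes to depth peaks.
    """
    spectrum = spectrum - spectrum.mean()            # remove the DC background
    windowed = spectrum * np.hanning(len(spectrum))  # taper to suppress sidelobes
    depth_profile = np.fft.ifft(windowed)            # spectral -> depth domain
    return np.abs(depth_profile[: len(depth_profile) // 2])  # positive depths only

# Synthetic check: a fringe at 50 cycles across the spectrum should yield
# a reflector peak near depth bin 50.
k = np.arange(1024)
profile = oct_ascan(np.cos(2 * np.pi * 50 * k / 1024))
```

A real pipeline would batch thousands of such A-scans per volume and run the transforms on the GPU, which is where the cuFFT library mentioned above comes in.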
“The current generation of OCT imaging instruments used to get this type of data before and after surgery typically takes about five or six seconds to render a single volumetric image,” said Izatt. “We’re now getting those same images in about a tenth of a second — so it is literally a fiftyfold increase in speed.”
Thus far, Izatt’s solution has been used in more than 90 surgeries at the Duke Eye Center and the Cleveland Clinic Cole Eye Institute. Out in the medical market, companies are still competing to commercialize real-time 2D displays. Izatt estimates his team’s 3D solution will be ready for commercial use in a couple years.
“The most complex surgeries right now are done in these big centers, but some patients have to travel hundreds or thousands of miles to go to the best centers,” said Izatt. “With this sort of tool, we’re hoping that would instead be more widely available.”
The winner of the 2016 Global Impact Award will be announced at the GPU Technology Conference, April 4-7, in Silicon Valley.
The video game industry continues to pave the way for advancements outside of entertainment. Now, a creative new partnership between technology and pharmaceutical giants is reimagining medical imaging, and with it, tackling an incurable and unpredictable central nervous system disease that affects 2.3 million people globally.
Microsoft recently teamed up with Novartis AG to develop AssessMS to treat those suffering from multiple sclerosis (MS). The new program, which uses the Microsoft Kinect’s motion-tracking and camera technology, allows researchers to analyze important data regarding the patient’s physical symptoms, such as gait and dexterity, by recording their movements.
Imprecise measurements and inconsistent assessments of patients’ movements currently complicate patients’ and doctors’ ability to evaluate the severity of MS symptoms and make informed choices about care and treatment options. These difficulties carry over into the pharmaceutical industry, making new drug trials challenging and costly.
Microsoft and Novartis believe their new program can change this calculus by offering more refined data. As patients perform simple body movements and gestures in front of the Kinect motion-sensing camera, doctors can use precise measurements to evaluate the degree of impairment.
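As a purely illustrative sketch (the metric, data and function name below are invented, not part of AssessMS), one simple measure a camera-based system could derive from tracked joint positions is the variability of a repeated movement:

```python
import math

def position_variability(points):
    """Root-mean-square deviation of tracked 2-D joint positions from their mean.

    Higher values could indicate a less steady, more impaired movement.
    """
    mx = sum(x for x, _ in points) / len(points)
    my = sum(y for _, y in points) / len(points)
    return math.sqrt(
        sum((x - mx) ** 2 + (y - my) ** 2 for x, y in points) / len(points)
    )

# Invented example: fingertip positions (metres) at the end of repeated reaches.
steady = [(0.50, 1.20), (0.51, 1.21), (0.50, 1.19)]
shaky = [(0.45, 1.10), (0.58, 1.31), (0.40, 1.22)]
print(position_variability(steady) < position_variability(shaky))  # → True
```

The appeal of a consistent, automated metric like this is exactly what the quote below describes: the same number computed the same way at every visit.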
So far, prototypes have run hundreds of tests with patients in three of the top MS clinics in Europe. If the system shows promise, Novartis hopes to pursue the clinical validation process and seek regulatory approval.
This new system marks a break with previous games-based treatment options, in which doctors and patients primarily used games such as the Nintendo Wii Balance Board to help people with MS improve their balance. Several recent studies have found that games can also help patients improve motor skills and visual acuity, sharpen short-term memory, reduce depressive symptoms and relieve chronic pain, all difficulties common among people with MS.
“This is really super-interesting work,” Tim Coetzee, chief advocacy, services, and research officer at the National MS Society in the U.S. told Bloomberg. “The problem we are trying to solve in MS cries out for tools like this one where it is about being able to give the physician some consistent approach to measure the evolution of the disease.”
With these new advancements, the video game industry has the potential to revolutionize healthcare. As studies and tests continue to improve and evolve, researchers and game companies alike are working together to discover new treatments – even cures – to some of today’s biggest medical challenges.
Sea levels have traditionally been measured by marks on land – but the problem with this approach is that parts of the earth’s crust move too.
A group of researchers from Chalmers University of Technology in Sweden are using GPS receivers along the coastline in combination with reflections of GPS signals that bounce off the water’s surface. NVIDIA GPUs then crunch those data signals to compute the water level in real-time.
“Without the use of GPUs, we would not have been able to process all our signals in real-time,” said Thomas Hobiger, a researcher on the project.
This work has placed the team among the top five finalists for NVIDIA’s 2016 Global Impact Award, which provides a $150,000 grant to researchers doing groundbreaking work that addresses social, humanitarian and environmental problems.
Using GPU-accelerated deep learning, researchers at The Chinese University of Hong Kong pushed the boundaries of cancer image analysis in a way that could one day save physicians and patients precious time.
Traditionally, pathologists diagnose cancer by looking for abnormalities in tumor tissue and cells under a microscope, but it’s a time-consuming process that is prone to error.
The research team trained their deep convolutional neural network on a set of images of known abnormalities. They then used the trained network to segment individual glands from tissue, making it easier to distinguish individual cells and determine their size, shape and location relative to other cells. From these measurements, pathologists can estimate the likelihood of malignancy.
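The post-segmentation measurement step can be sketched briefly. Once the network has produced a binary mask separating glands from background, each connected region is one candidate gland, and its size, shape and position follow from simple image analysis. The example below is a hedged illustration using a hand-made toy mask and SciPy’s `ndimage` tools, not the CUHK team’s actual pipeline or descriptors.

```python
import numpy as np
from scipy import ndimage

# Toy binary mask standing in for the network's gland segmentation output.
mask = np.zeros((64, 64), dtype=bool)
mask[5:15, 5:25] = True    # an elongated "gland"
mask[40:50, 40:50] = True  # a roughly square one

# Label connected components: each label is one segmented gland.
labels, n_glands = ndimage.label(mask)

for i in range(1, n_glands + 1):
    region = labels == i
    area = int(region.sum())                     # size in pixels
    cy, cx = ndimage.center_of_mass(region)      # location in the image
    ys, xs = np.nonzero(region)
    elongation = (np.ptp(xs) + 1) / (np.ptp(ys) + 1)  # crude shape descriptor
    print(f"gland {i}: area={area}, centre=({cy:.1f}, {cx:.1f}), "
          f"elongation={elongation:.2f}")
```

Measurements like these, computed per gland across a whole slide, are the kind of quantitative input that lets pathologists compare tissue against known patterns of malignancy.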
“Training with GPUs was 100 times faster than with CPUs,” said Hao Chen, a third-year Ph.D. student and member of the team that developed the solution. “That speed is going to become even more important as we advance our work.”