Facebook Donating 200 GPUs to European Researchers

The Facebook Artificial Intelligence Research (FAIR) lab announced a new Research Partnership Program to spur advances in Artificial Intelligence and machine learning — Facebook will be giving out 25 servers powered with GPUs, free of charge.

The first recipient is Klaus-Robert Müller of TU Berlin, who will receive 32 GPUs in four GPU servers. “Dr. Müller will receive four GPU servers that will enable his team to make quicker progress in two research areas: image analysis of breast cancer and chemical modeling of molecules,” FAIR engineering director Serkan Piantino and research lead Florent Perronnin wrote in a blog post.

Facebook will supply server recipients with all of the necessary software to run the servers and will also send some of its own researchers to various institutions for collaboration.

Facebook CEO Mark Zuckerberg made the announcement from Berlin in a live video feed on his Facebook page – he also discussed the latest in AI with Yann LeCun, who leads Facebook AI Research.

Click on the image above to watch the Facebook GPU fireside chat about the announcement.

“VR is going to need 10 years to become a very mainstream big thing,” Zuckerberg said during the live video feed. “But we’re committed to this. We have the resources to be able to invest and use these investments across the world to bring the research community into this.”

Unity – What’s new in Unity 5.3.4

The Unity 5.3.4 public release brings you a few improvements and a large number of fixes. Read the release notes below for details.

For more information about the previous main release, see the Unity 5.3 Release Notes.

IMPROVEMENTS

  • Android: Audio; don’t select OpenSL output if the native device params are too bad for fast path (fixes audio issues on some buggy devices).
  • Android: Buildpipe; updated SDK tools requirements for the Editor.
  • Android: Editor; added Marshmallow to the list of APIs.
  • Android: IL2CPP; use Android NDK x64 on x64 Windows Editor.
  • Android: Soft Input; get rid of hardcoded text color, switch to Light theme.
  • Editor: Added warning dialog if there is any version difference between editor and last project save.
  • Metal: Add -force-metal switch to force Metal rendering on OSX/iOS.
  • OpenGL Core: A whole bunch of fixes, particularly on Macs. See Fixes list below.
  • Scripting: introduced the global define UNITY_5_3_OR_NEWER, which can be used to conditionally compile code that is compatible only with Unity 5.3 or newer (see the sketch after this list).
  • Win / OSX Standalone: Add -hideWindow command line option to launch standalone applications with the window hidden.
  • Windows: Added a new command line argument for standalone builds: -window-mode. Options: borderless, exclusive. It lets users override the default fullscreen window behavior.
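
As a hedged illustration of the new UNITY_5_3_OR_NEWER define, here is a minimal C# sketch that gates version-specific code. The SceneLoader class and Load method names are ours for illustration; SceneManager.LoadScene is the Unity 5.3 scene-loading API and Application.LoadLevel the older one:

    using UnityEngine;

    public class SceneLoader : MonoBehaviour
    {
        // Load a scene using whichever API this Unity version provides.
        public void Load(string sceneName)
        {
    #if UNITY_5_3_OR_NEWER
            // Unity 5.3+ moved scene loading into the SceneManagement namespace.
            UnityEngine.SceneManagement.SceneManager.LoadScene(sceneName);
    #else
            // Older versions used the Application API.
            Application.LoadLevel(sceneName);
    #endif
        }
    }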

 

FIXES

  • 2D: Changing Rigidbody2D.gravityScale while a Rigidbody2D.MovePosition is in progress now works. (762771)
  • 2D: Ensure Joint2D auto-configuration still works if joint is added from script (765870)
  • 2D: Ensure that a TargetJoint2D added via script allows collisions with static colliders. (763922)
  • 2D: Stop Rigidbody2D with Interpolation being placed at world origin for a single update upon start. (764769)
  • Analytics: Fixed unreliable event sending (especially AppStart) on WebGL. (770316)
  • Android: Added support for Vivante OpenGL ES 3 driver. (738821, 758155)
  • Android: Fixed alignment crash on some Android ARM devices. (768974)
  • Android: Fixed an issue where Ping wouldn’t work in release mode. (734124)
  • Android: Fixed black screen or crash during startup on old PVR devices (Samsung Galaxy S I9000). (762875)
  • Android: Fixed bug in Texture.GetPixels for ETC compressed textures. (759195)
  • Android: Fixed rendering artifacts when using native plugins and multithreaded renderer. (772171)
  • Android: Fixed rendering on Vivante GPUs on Android 4.3 and older. (712890, 771890)
  • Android: IL2CPP; fixed build errors on NDK paths with whitespaces. (763447)
  • Android: IL2CPP; fixed crash on second startup after installation. (766698)
  • Android: IL2CPP; prevent a crash in the garbage collector when it attempts to scan a section of memory used for code that may have been unmapped by the OS. (755201)
  • Animation: Fixed a crash when animating lights using the legacy Animation component. (772260)
  • Animation: Prevent crashes when clips are null for animations extracted from asset bundles whose dependencies have not loaded. (756463)
  • API Updater: Fixed “Sequence contains more than one matching element” crash. (760684)
  • API Updater: Fixed crash upon assembly resolution failures. (743463)
  • Asset Bundles: Only reimport when setting asset bundle name if cache server is connected. (714661)
  • AssetBundles: BuildAssetBundles will switch back to the original active build target when finished. (759142)
  • Audio: Disabled sound manager watch dog. (774356)
  • Core: Fixed an issue asynchronously loading a prefab with a large amount of assets. (771882)
  • Core: Fixed some errors with recently deleted objects (in WWWDelayCall; ClearPersistentDirty call; editor CEF integration).
  • Editor: Don’t call OnLevelWasLoaded on the first scene when entering play mode. (759231)
  • Editor: Fix for clustering allocation while navigating. (747856)
  • Editor: Fixed a crash when selecting some prefabs. (766469)
  • Editor: Fixed an issue that made GameObjects disappear from the Editor if they have an associated editor script that made use of DontDestroyOnLoad. (754127)
  • Editor: Fixed an issue when opening a scene from the Project Browser while in playmode it resulted in that scene being loaded even after going out of playmode. (767728)
  • Editor: Fixed misleading texture decompression warning in graphics emulation. (760112)
  • Editor: Fixed startup when Unity is in a path with non-ASCII characters. (765159)
  • Editor: Fixed the issue of GUI.Windows background not being tinted by GUI.colors anymore. (756004)
  • Editor: Fixed the issue of marking scene dirty when creating prefab by dragging from Hierarchy window to Project. (758409)
  • Editor: Fixed the issue of marking the scene dirty when pressing the apply button on a prefab instance. (757027)
  • Global Illumination: Fixed crash in some scene loading scenarios. (768849)
  • Global Illumination: Fixed crash when building lighting with a specific scene setup. (767222)
  • Global Illumination: Fixed light probes not being used anymore in Standalone when a scene without light probes was loaded with Additive mode. (767161)
  • Global Illumination: Fixed multi-scene baking. (751599)
  • Global Illumination: Fully repaint inspectors after baking reflection probes; some previews were not updating before. (663992)
  • Global Illumination: When compositing the directional lightmap, removed the clamping on the w-component of the generated pixels.
  • Graphics: Added profiler markers on async texture loading waits.
  • Graphics: Fixed .ogv movie files with stream markers beyond 16kb not imported correctly. (772013)
  • Graphics: Fixed “Trying to reload asset from disk that is not stored on disk” error when non-persistent objects are attempted to be reloaded from disk. (752613)
  • Graphics: Fixed a synchronization problem that was causing texture data not to be properly updated when changing quality settings at runtime. (752613)
  • Graphics: Fixed an issue where GrabPass could get source texture wrongly offset in some cases. (75508, 726067)
  • Graphics: Fixed an issue where setting a material’s shader to null would crash the editor. (771292)
  • Graphics: Fixed an issue where TrailRenderer would randomly vanish/flicker. (740580)
  • Graphics: Fixed crash in SetGpuProgramName which could happen when the program isn’t supported by the target graphics hardware (found on Android). (772958)
  • Graphics: Fixed GenerateSecondaryUVs crashes on some meshes.
  • Graphics: Fixed MovieTextures sometimes being black in Mac Standalone (64 or Universal builds). (765928)
  • Graphics: Prevent Projectors from accepting invalid clip planes from a script. (506089, 535548)
  • Graphics/DX11: Fixed compute shader resource hazards found in certain cases when binding the same resource SRV and UAV on pixel and compute shader stages. (542251)
  • IL2CPP: Avoid crash when constructing error message. (770081)
  • IL2CPP: Correctly sort unsigned integers via the Array.Sort method. (774085)
  • IL2CPP: Prevent generated C++ code from failing to compile with errors like “error: use of undeclared identifier ‘L_5’” in some cases. (773713, 768010)
  • IL2CPP: Properly marshal formatted classes. (762883, 746313)
  • IL2CPP: Properly parse binary text assets. (771835)
  • iOS Metal: Fixed performance regression when doing in-frame clear (GL.Clear or command buffer Clear). (775362)
  • iOS: Fixed redirect for WWW. (723960)
  • Linux: Fixed MSAA in non-upscaled windows, force window recreation when requirements change for player window attributes.
  • Linux/GLCore: Fixed one more instance of render. (775575)
  • Mecanim: Fixed an issue with Animation clip length for bundled clip. (753888)
  • Mecanim: Fixed AnimationClip.SampleAnimation memory leak. (760612)
  • Mecanim: Fixed Animator with statemachine behaviour runtime compile error not firing callback on the right SMB. (756129)
  • Mecanim: Fixed assert when using Animator.MatchTarget.
  • Mecanim: Fixed long play mode start times for scenes with lots of controllers. (769964)
  • Mecanim: Fixed StateMachineBehaviours on layer not being called properly. (765141)
  • Mono: Added IPv6 support on Windows. (767741)
  • Networking: Fixed a crash due to wrong initialization of connection.
  • Networking: Fixed an issue where the ack didn’t reset when the connection was reset, which led to reliable traffic stalling. (775226)
  • Networking: Removed annoying “Attempt to send to not connected connection” message. (775222)
  • Networking: Removed the annoying “no free events for message” message. (775225)
  • Networking: Fixed “Send Error: val > 0” on user disconnect, which resulted in a memory write violation and editor crash. (754510)
  • OpenGL Core: Fixed twitching and incorrect rendering with skinning and UI components on GLCore + Mac + NVIDIA. (773476, 775275, 767857, 766778)
  • OpenGL Core: Fixed fullscreen mode when not using native resolution and using MSAA on Mac AMD GPUs. (775428, 776470)
  • OpenGL Core: Fixed fullscreen MSAA support with linear color space rendering. (774558, 774216)
  • OpenGL Core: Fixed Graphics API switching to OpenGL. (762687)
  • OpenGL Core: Fixed occasional game view flipping with image effects in the editor. (760196)
  • OpenGL Core: Fixed dynamic geometry performance issues on Mac + NVIDIA.
  • OpenGL Core: Fixed stretched game view with some image effects in Mac editor. (757536, 757866)
  • OpenGL Core: Workaround for Nvidia shader compiler bug on OS X, affecting SSAO shader. (756028)
  • OpenGL Core/ES: Fixed scalar uniform handling in the shader translator. (772167)
  • OpenGL Core/ES: Fixed wrong shader code generation when redirecting variables, was affecting FastBloom shader. (772434)
  • OpenGL Core/ES: Shader compiler, fixed invalid uniform access in certain corner cases. (767343)
  • OpenGL ES: Fixed non-shadowmap depth textures on some devices. (768916)
  • OpenGL: Fixed point size support using GLSL PROGRAM snippets. (763875)
  • Particles: Fixed a culling regression, when particle systems leave the screen and come back. (773673)
  • Particles: Fixed error message due to default bounding box. (767419)
  • Physics: Fixed a PhysX crash issue in PxsCCDContext::updateCCD experienced by some VR applications. (776187)
  • Profiler: Fixed crash when adding data from thread which was started during a frame. (758264)
  • Profiler: Fixed hang when EndSample did not have a matching BeginSample. (770225)
  • Shaders: Added support to the compute shader compiler to handle bools inside structures.
  • Shaders: Fixed Standard shader in some rare cases outputting NaN as pixel shader result. (766806)
  • Shadows: Fixed shadows disappearing for some off-screen shadows casters. (761152)
  • Substance: All inputs are now applied to a ProceduralMaterial on the first RebuildTextures() call after the material’s textures have been read from cache. Previously, only the modified inputs were applied.
  • Substance: Fixed Editor freeze upon instantiation of resource. (771650)
  • Substance: Fixed broken detection and assignment of shader keywords resulting in wrong appearance of ProceduralMaterials in scenes when the ProceduralMaterial was not opened in the Inspector (some shader keywords were enabled when they should not).
  • Substance: Fixed cache hashing and management issue which could cause the cache to be considered valid again after having been invalidated (often seen after calling Resources.UnloadUnusedAssets()).
  • Substance: Fixed loading of files with special characters in their paths or names.
  • Substance: Fixed rare crash caused by using the wrong size when uploading ProceduralTextures.
  • Substance: Fixed upgrading Unity 4.x project data; legacy shaders should now be used for these old projects instead of being incorrectly replaced by the Standard shader. (765510)
  • Tizen: Fixed error about copying whitelists while building on Windows. (773614)
  • UI: Fixed exceptions upon assembly/type resolution failures. (770048)
  • UI: Fixed memory leak where dirty renderers on a disabled canvas would still get added to the dirty list causing crashes on clear. (773533)
  • UI: Fixed sorting issue where grid-based depth sorting would fail to recognize overlapping unbatchable items. (770804)
  • UI: Stopped raycast from traversing up the hierarchy when a canvas with override sorting is encountered.
  • UI: Vertical alignment of text sometimes appearing higher than expected. (760753)
  • Unity Ads: Updated to 1.5.6 version (fixes crash on Android 4.1 and earlier).
  • VR: Restart the VR Device if the Oculus Service fails.
  • Wii U: Fixed a crash on secondary error confirmation. (767206)
  • Windows Store: Fixed exported VS project failing to build for non-x86 CPU when there is a managed assembly in the project that’s been compiled for x86. (770931)
  • Windows Store/IL2CPP: Fixed game crashing when its name is exactly 20 characters long. (769835)
  • Windows Store/IL2CPP: Fixed intellisense in generated Il2CppOutputProject to be able to correctly resolve Windows 10 headers. (771765)
  • Windows Store/IL2CPP: Fixed failing to build on top of a previous build when target directory is read-only. (766764)
  • Windows Store/IL2CPP: Graphics plugins now work. (770941)
  • Windows Store/IL2CPP: Reduced the amount of linker warnings when building.
  • Windows: Fixed Application.persistentDataPath when Product Name contains invalid path character. (756152)
  • Windows: Fixed unnecessary symbols exported for the Windows Standalone Player executable. This was making some Nvidia drivers wrongly pick the integrated GPU instead of the discrete one on some systems.
  • XBoxOne/IL2CPP: Allow a call to Guid.NewGuid to work correctly. (769711)

 

Simulating Real-World Floods on GPUs

Flood risk assessment is important in minimizing damages and economic losses caused by flood events.

A team of researchers from Vienna University of Technology and the visual computing firm VRVis is using GPUs to run fast simulations of large-scale scenarios, including river flooding, storm-water events and underground flows.

The researchers’ primary interest is in decision-making systems, where they evaluate many different scenarios and select the solution with the best outcome, which is usually very computationally expensive. Simulation runs therefore need to be as fast as possible to reduce the overall time required to find the best solutions (see the sketch below).

Uncertainty-aware prediction of mobile flood protection wall overtopping in Cologne. (a) Input hydrographs forming an ensemble of 10 different scenarios with varying peak levels. (b, c) Visualization of ensemble results. Buildings are colored according to the expected damage. The terrain is colored according to the average water depth.

In their real-world test case, the researchers used CUDA and a GTX TITAN GPU to simulate the overtopping of mobile flood protection walls in Cologne, Germany. The overtopping happens when the water in the Rhine River rises above 11.9 meters.
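
To make the ensemble idea above concrete, here is a minimal C# sketch of evaluating a set of scenarios and keeping the one with the best outcome. Everything here is an assumption for illustration: the hydrograph peak values, the SimulateDamage placeholder and its toy damage model are ours, not the researchers' actual GPU code:

    using System;
    using System.Linq;

    class EnsembleRunner
    {
        // Rhine level at which the mobile protection walls overtop (from the article).
        const double OvertoppingLevelMeters = 11.9;

        static void Main()
        {
            // Ten input hydrographs with varying peak levels, as in the Cologne ensemble.
            var scenarios = Enumerable.Range(0, 10)
                .Select(i => new { Id = i, PeakLevelMeters = 11.0 + 0.2 * i });

            // Run one (fast) simulation per scenario and keep the best outcome.
            var best = scenarios
                .Select(s => new { s.Id, Damage = SimulateDamage(s.PeakLevelMeters) })
                .OrderBy(r => r.Damage)
                .First();

            Console.WriteLine($"Best scenario: {best.Id}, expected damage score: {best.Damage:F2}");
        }

        // Placeholder for the GPU flood simulation: a toy damage score that grows
        // with how far the peak level exceeds the overtopping threshold.
        static double SimulateDamage(double peakLevelMeters) =>
            Math.Max(0.0, peakLevelMeters - OvertoppingLevelMeters) * 100.0;
    }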

Virtual Reality Therapy for Spider Phobia

Phobics typically panic or become anxious when they encounter the object or situation that makes them afraid, even though they know the object or situation (e.g., a small house spider) is not that dangerous. Such an unrealistic or excessive fear of an object or situation is a psychological disorder that can make life miserable for years. Exposure therapy has proved effective for many different types of phobias, including spider phobia. Exposure therapy is a clinical treatment based on gradually and systematically exposing the phobic person to the feared object or situation a little at a time, starting very slowly, and calming them. Little by little their fear decreases and they become more comfortable with spiders. They will probably always be a little creeped out by spiders, but therapy can train them not to panic. After treatment, most “former phobics” start living life more fully. Success overcoming their fear can lead to increased self-confidence, which in turn often has other positive benefits.

In vivo exposure therapy is a combination of cognitive psychology and behavioral therapy (Cognitive-Behavioral therapy, which is not Freudian). People are taught to think a little differently about spiders (this is the cognitive part of the therapy, where cognition means “thinking”). In addition, during treatment, phobics are deconditioned using stimulus-response learning (and unlearning). This is the behavioral part of the treatment.

Miss Muffet demonstrates she is no longer afraid of real spiders after VR therapy (a scene from SPIDERS!).

Pavlov, an early behaviorist, paired a stimulus (a bell) with the presence of food. Every time the dogs heard a bell, they got some food. After doing this enough times, the dogs started to associate the bell with the food. The dogs would start salivating when they heard the bell, even if there was no food present. It is believed that spider phobia is due at least in part to a similar Stimulus-Response association. The spider, the stimulus, evokes a response: fear and anxiety. Every time the phobic runs in fear from a spider, it strengthens or at least helps maintain this association. Avoidance feeds phobias.

Did you know that Stimulus-Response conditioning can be reversed? A dog that has been conditioned to salivate when it hears a bell can be untrained! If you ring a bell without presenting food enough times, pretty soon the Stimulus-Response association between bell and food disappears, and the dog no longer salivates (or salivates much less) when it hears the bell. The behavioral therapy part of VR exposure therapy uses a similar approach to treat spider phobics.

With in vivo (in life) exposure therapy, under a therapist’s supervision and guidance, rather than avoiding the feared object, the phobic slowly approaches it in the real world. Phobics initially display a rapid increase in anxiety, but if they hold their ground instead of fleeing, their fear and anxiety will actually habituate. They stop sweating, their heart rate slows down, and they feel less anxiety, even though they are standing fairly near a spider! It is as if their nervous systems start to get bored with the spider. During this phase of in vivo exposure therapy, when their anxiety is going down in the presence of the live spider (e.g., a tarantula in a terrarium), they are reversing the Stimulus-Response association. The old association is sort of “cancelled out” by a new association between the presence of the spider and a DROP in anxiety! That is, they start to associate a spider with becoming LESS anxious. In addition, the therapist explains a number of things to the patient and helps the patient think differently about spiders and about their own anxiety (this is the cognitive portion of the treatment). It will be a few years before VR exposure therapy is more widely available. While it is possible to get treated with VR right now, you may not be able to go all the way to Los Angeles to get it.

On the West Coast, call Brenda Wiederhold, MBA, Ph.D., at the California School of Professional Psychology in San Diego, CA, for more information about getting VR exposure therapy: (858) 623-2777, ext. 415, Bwiederhold@cspp.edu, http://www.vrphobia.com

On the East Coast (Manhattan), contact JoAnn Difede, Ph.D., Director of the Program for Anxiety and Traumatic Stress Studies at Weill Cornell Medical College’s Department of Psychiatry. http://www.patss.com/

In the Seattle area, Brian Neville, Ph.D., LLC is now using a number of techniques, including VR exposure therapy, to treat a number of phobias (private clinical practice in Woodinville, WA). http://home.covad.net/~neville1959/index.htm, 18500 156th Ave NE, Suite 202, Woodinville, WA 98072

425-481-5700, ext. 10#

or to find treatment providers worldwide go to http://www.virtuallybetter.com/

So, if you can’t find a place to get VR therapy, keep in mind that “in vivo” exposure therapy works really well too, and the psychology department (preferably the clinical psychology or psychiatry department) of a university near your city can refer you to a good Cognitive-Behavioral therapist.

Claustrophobia, fear of heights, fear of spiders, fear of cats, fear of dogs, fear of driving, fear of flying and fear of public speaking are common examples of specific phobias (there are numerous other examples). Cognitive-Behavioral therapy for specific phobias with in vivo exposure therapy has a VERY high success rate, and typically takes 12 hours or less, a very small amount of time considering how long some other problems take. Yet people with phobias rarely seek treatment for their problem! Most just limp through life avoiding the thing they are afraid of, in constant fear of being discovered. In other words, many phobics are afraid of the embarrassing panic attack they will have, i.e., how bizarre they will act if they happen to run into a spider. Some are more worried about this over-reaction and its social consequences than they are about the spider itself.

In some situations, fears can be dangerous. For example, a person may nearly wreck their car when a spider drops into their lap from a visor, or a patient who needs a brain scan may be unable to enter the confined scanner because of claustrophobia. Over 80% of people who seek Cognitive-Behavioral therapy (e.g., in vivo exposure therapy) for their phobias no longer panic when they encounter a spider, and this typically holds true indefinitely (i.e., people tend to remain cured once successfully treated).

Despite the fact that this type of treatment is so fast and effective, only a small proportion of spider phobics ever actually seek treatment for their problem. The reason is fairly obvious. Understandably, THEY DON’T WANT TO GO ANYWHERE NEAR A LIVE SPIDER, EVEN FOR THERAPY.

Virtual reality to the rescue

In collaboration with others, Barbara Rothbaum (a clinical psychologist from Emory) and Larry Hodges (a computer science expert from Georgia Tech) were the principal investigators in the first published journal study on using VR exposure therapy for treating a phobia (fear of heights; see http://www.cc.gatech.edu/gvu/virtual/). This was followed by a publication on the use of VR for treating spider phobia by our research group at the HITLab (Carlin and Hoffman at the University of Washington in Seattle, who have recently been working with Azucena Garcia and Christina Botella from Spain; more about this later).

Since then, down in Atlanta, Georgia, Rothbaum and Hodges have had great success using VR exposure therapy to treat fear of flying, and they have ambitiously had some encouraging preliminary results treating post-traumatic stress disorder in Vietnam vets, a disorder notoriously difficult to help (unlike phobias, which are easy to treat quickly and successfully). Hodges and Rothbaum are currently exploring the use of VR for treating fear of public speaking as well. Several of the virtual worlds developed by Hodges and Rothbaum are now commercially available for clinicians interested in using VR exposure therapy with their patients (see http://www.virtuallybetter.com/).

Brenda Wiederhold has spearheaded the creation of a treatment center in Southern California. She and her colleagues have treated over 100 phobics with Hodges’ virtual reality exposure therapy software. Brenda Wiederhold’s group presently treats fear of heights, fear of flying and a number of other problems.

Albert Carlin and Hunter Hoffman published the second journal paper on VR exposure therapy. At the suggestion of one of Dr. Carlin’s patients, Al and Hunter extended Rothbaum and Hodges’ idea of using immersive virtual reality for exposure therapy to a new type of fear: spider phobia. It was actually the idea of Miss Muffet, the first patient they treated together. Prior to treatment, Miss Muffet had been clinically phobic for nearly 20 years and had acquired a number of spider-related obsessive-compulsive behaviors. She routinely fumigated her car with pesticides and smoke to get rid of spiders. She sealed all bedroom windows with duct tape each night after scanning the room for spiders. She was hypervigilant, searching for spiders wherever she went, and avoiding walkways where she might find one. After washing her clothes, she immediately put her clothing inside a sealed plastic bag to make sure it remained free of spiders. Over the years, her condition became worse. When her fear made her hesitant to leave home (a very extreme phobia), she finally sought therapy.

Researcher Hunter Hoffman, U.W., holding a virtual spider near the face of a patient as part of virtual reality phobia exposure therapy to reduce fear of spiders. In the immersive virtual world called SpiderWorld, patients can reach out and touch a furry toy spider, adding tactile cues to the virtual image and creating the illusion that they are physically touching the virtual spider. Tactile augmentation was shown to double treatment effectiveness compared to ordinary VR. Photo: Mary Levin, U.W., with permission from Hunter Hoffman, U.W. (Picture on right: an image of what patients see, in 3-D, in SpiderWorld as they grab a wiggly-legged virtual tarantula.)

During the 12 one-hour VR therapy sessions at the U.W. Human Interface Technology Laboratory (HITLab) in Seattle, Miss Muffet started very slowly. First she stood completely across the virtual world from the virtual spider. Slowly she got a little closer, her progress closely monitored by Al and Hunter, who watched what she was seeing in VR on a computer monitor. In later sessions, after she had lost some of her fear of spiders, she was sometimes encouraged to pick up the virtual spider and/or web with her cyberhand and place it in the orientations that were most anxiety provoking. Other times, the experimenter controlled the spider’s movements (unexpected jumps, etc.). Some virtual spiders were placed in a cupboard with a spiderweb. Other virtual spiders climbed, or dropped on their thread from the ceiling, to the virtual kitchen floor. Eventually, after getting used to them, Miss Muffet could tolerate holding and picking up the virtual spiders without panicking. She could pull the spider’s legs off (initially this occurred accidentally, and then deliberately at the experimenter’s request). A large brown virtual spider with photograph-quality texture-mapped fur (made by Scott Rousseau and Ari Hollander, see www.imprintit.com, and later re-made with animations by Duff Hendrickson) and a smaller black spider with an associated 3-D web were employed. By far the best spider (just kidding) was the one Hunter made: a virtual black widow spider, which reminded Miss Muffet of the spiders she saw in her nightmares, described next. The black one was flawed in that it was possible to pull the virtual legs off if one grabbed it right. This turned out to be good.

After only two one-hour Virtual Reality exposure therapy sessions, Miss Muffet was noticing some very important progress. For example, prior to VR treatment, she had a recurring nightmare about spiders (very scary). After her second VR exposure session, she had her nightmare again that night, but it was no longer scary. In fact, in her dream, she was able to talk to the spiders for the first time, and scolded them for scaring her. “Don’t feel bad lady, we scare everyone,” said their cigar-smoking thug leader in her dream. “Well STOP IT,” she told them. The magic spell the spiders had on her was broken by her recent VR exposure therapy. Really, the truth is, the magic spell that SHE had on HERSELF was broken. VR allowed her to reverse the spell she had somehow cast on herself earlier in life, without intending to. When she came in for her third one-hour VR treatment session, there was a sparkle in her eye. She could tell she was making progress, and that gave her confidence and bravery and made her hungry to finish the job of curing herself. After several more one-hour VR sessions over several weeks (one treatment per week for three months total), she reported to us that she had had the nightmare yet again, but this time the spiders in her dream were gone…only cobwebs remained. This routine with the dreams may only happen with this one patient, it’s hard to predict, but it was very interesting to us. As a psychologist interested in how the human mind works, this experience treating spider phobics with VR has been fascinating for me (Hunter).

Toward the end of Miss Muffet’s therapy (e.g., after about nine one-hour sessions), Al Carlin and Hunter started running out of new tricks to use to evoke anxiety from Miss Muffet. Miss Muffet reached out with her cyberhand in the virtual world to touch the virtual spider, but contrary to her earlier panic reactions, she had only a little anxiety now, since she had gotten used to grabbing the virtual spider.

Researcher Hunter Hoffman, U.W. holding a virtual spider near the face of a patient as part of virtual reality phobia exposure therapy to reduce fear of spiders.

In order for therapeutic progress to continue, Hunter and Al had to come up with some new spider behaviors or new spider-related experiences that would initially evoke an anxiety response, so they could continue to habituate Miss Muffet. They tapped a technique called mixed reality that Hunter had been studying in some other VR research. One weird thing about virtual objects is that they are typically only visual illusions: when you reach out to touch a virtual spider, your cyberhand goes right through the spider. If you reach out to touch a virtual wall, typically your virtual hand sticks right through the wall like something from a sci-fi movie. This quality of non-solidity is interesting and fun, but it detracts from VR’s realism. To give the virtual spider solidity and weight (cyberheft), Hunter rigged up a furry toy spider with a bad toupee, such that when Miss Muffet reached out to touch the virtual spider in the virtual world, her real hand simultaneously touched the furry toy spider in the real world! Although we told her it was coming, Miss Muffet was quite surprised when she had the illusion of physically touching the virtual spider. Suddenly, the virtual spider she had grown accustomed to touching without anxiety during therapy now evoked a huge anxiety response. But, as predicted, Miss Muffet even got used to this “mixed reality” spider. It is called mixed reality because it was part virtual (the visual animated spider in VR) and part real (the tactile cues from the real toy spider). See the papers at www.hitl.washington.edu/people/hunter/ for more info on Hunter’s research on tactile augmentation and mixed reality.

According to Miss Muffet, this extraordinary experience/illusion of physically groping the plump furry body of a Guyana bird-eating tarantula was a big turning point. She said that after she had gotten over the anxiety it evoked, she was largely cured. After holding that virtual beast, an ordinary real spider in her real kitchen was not scary at all. A subsequent controlled experiment with 36 participants showed that Miss Muffet was right: exposure therapy culminating in the handling of a mixed reality spider increased therapeutic effectiveness compared to the same therapy without any mixed reality (i.e., with only virtual spiders that couldn’t be physically touched). See Hoffman, Garcia-Palacios, Carlin, Furness III, and Botella (2003).

Garcia-Palacios, A., Hoffman, H.G., Kwong See, S., Tsai, A., & Botella-Arbona, C. (2001). Redefining therapeutic success with VR exposure therapy. CyberPsychology and Behavior, 4, 341-348.

Hoffman, H.G., Garcia-Palacios, A., Carlin, A., Furness, T.A. III, & Botella-Arbona, C. (2003). Interfaces that heal: Coupling real and virtual objects to cure spider phobia. International Journal of Human-Computer Interaction, 16, 283-300.

During the course of therapy the patient could also squash the virtual spiders with a mixed-reality ping pong paddle. These interactions in VR caused her great anxiety, including trembling, sweating, dryness of mouth, and a feeling of being on the verge of tears.

Prior to VR treatment, the patient filled out a fear-of-spiders questionnaire. A sample of 280 undergraduate psychology students filled out the same questionnaire as a comparison group. The undergrads received no treatment and gave their ratings only once. Initially, only one undergraduate had a higher fear-of-spiders score than the patient. After 12 weekly one-hour desensitization treatments for the patient, 29% (80 students) had higher fear-of-spiders scores than the patient.

The results are very encouraging. Importantly, this dramatic reduction in the patient’s fear of spiders is also reflected in the patient’s behavior in the real world. She stopped engaging in obsessive-compulsive spider rituals, and can now interact with real spiders with moderate but manageable emotion. Her improvement is so profound that she has time for new hobbies such as camping outdoors, something she would never have dreamed of doing prior to therapy. In fact, to her amazement, the story came full circle: Miss Muffet became the star of a Scientific American Frontiers program on SPIDERS! on PBS that featured the SpiderWorld developed by Hoffman and Carlin. She is shown at the top of this webpage holding a real tarantula (don’t do this at home). You can watch this free educational science documentary video clip about our use of virtual reality exposure therapy to treat Miss Muffet at PBS by clicking HERE (once at PBS, be sure to scroll down to the digital video story called “arachnophobia”).

She is the first spider phobia patient to be cured using immersive VR therapy. This case study (Carlin, Hoffman and Weghorst, 1997) provides converging evidence to the growing literature showing the effectiveness of VR for medical applications. We have since treated about 20 clinical phobics at the HITLab with a success rate of approximately 85%, and continue to conduct research on this interesting topic. See Garcia-Palacios, A., Hoffman, H.G., Carlin, A., Furness, T.A. III, & Botella-Arbona, C. (2002). Virtual reality in the treatment of spider phobia: A controlled study. Behaviour Research and Therapy, 40, 983-993.

Rothbaum and Hodges were first, Carlin and Hoffman were second to publish, and Botella and colleagues from Spain were the third group to publish a case study on using immersive VR exposure therapy for treating phobia. Interestingly, all three groups published in the journal Behaviour Research and Therapy. Botella et al. created a VR treatment for claustrophobia, the fear of enclosed spaces. Part of this treatment involves going into a fairly large virtual room. The patient controls the walls of this room, which close in on the patient in VR. As the walls close in, they make a noise like concrete scratching on concrete. Claustrophobia is a big problem for some people who need to have a brain scan but can’t bear to go into the brain scanner. Botella and colleagues are also having success using VR to treat severe anorexia. Botella’s active group in Spain (which includes Azucena Garcia-Palacios and several other talented clinical psychologists) is quickly becoming one of the top centers in the world for research on VR treatments for psychological disorders.

Virtual Reality in Healthcare

Healthcare is one of the biggest adopters of virtual reality, encompassing surgery simulation, phobia treatment, robotic surgery and skills training.

One of the advantages of this technology is that it allows healthcare professionals to learn new skills, as well as refresh existing ones, in a safe environment, without posing any danger to patients.

Human simulation software

One example of this is the HumanSim system, which enables doctors, nurses and other medical personnel to engage in training scenarios in which they must interact with a patient, but within a 3D environment only. It is an immersive experience that measures the participant’s emotions via a series of sensors.

Virtual reality diagnostics

Virtual reality is often used as a diagnostic tool in that it enables doctors to arrive at a diagnosis in conjunction with other methods such as MRI scans. This removes the need for invasive procedures or surgery.

Virtual robotic surgery

A popular use of this technology is in robotic surgery, where surgery is performed by means of a robotic device controlled by a human surgeon, which reduces time and the risk of complications. Virtual reality has also been used for training purposes and in the field of remote telesurgery, in which the surgeon performs surgery from a separate location to the patient.

The main feature of this system is force feedback as the surgeon needs to be able to gauge the amount of pressure to use when performing a delicate procedure.

But there is an issue of time delay or latency, which is a serious concern: any delay, even a fraction of a second, can feel abnormal to the surgeon and interrupt the procedure. So there needs to be precise force feedback in place to prevent this.

Robotic surgery and other issues relating to virtual reality and medicine can be found in the virtual reality and healthcare section. This section contains a list of individual articles which discuss virtual reality in surgery etc.

More Examples of Virtual Reality and Healthcare

This section looks at the various uses of VR in healthcare and is arranged as a series of the following articles:

  • Advantages of virtual reality in medicine
  • Virtual reality in dentistry
  • Virtual reality in medicine
  • Virtual reality in nursing
  • Virtual reality in surgery
  • Surgery simulation
  • Virtual reality therapies
  • Virtual reality in phobia treatment
  • Virtual reality treatment for PTSD
  • Virtual reality treatment for autism
  • Virtual reality health issues
  • Virtual reality for the disabled

Some of these articles contain further sub-articles. For example, the virtual reality in phobia treatment article links to a set of articles about individual phobias, e.g. arachnophobia, and how they are treated with this technology.

Most of us think of virtual reality in connection with surgery, but this technology is also used in non-surgical ways, for example as a diagnostic tool. It is used alongside other medical tests such as X-rays, scans and blood tests to help determine the cause of a particular medical condition. This often removes the need for further investigation, such as surgery, which is both time consuming and risky.

Augmented reality is another technology used in healthcare. If we return to the surgery example: with this technology, computer generated images are projected onto the part of the body to be treated, or are combined with scanned real-time images.

What is augmented reality? This is where computer generated images are superimposed onto a real world object with the aim of enhancing its qualities. Augmented reality is discussed in more detail as a separate section.

Unity – What’s new in Unity 5.3.3

The Unity 5.3.3 public release brings you a few improvements and a large number of fixes. Read the release notes below for details.

For more information about the previous main release, see the Unity 5.3 Release Notes.

IMPROVEMENTS

  • GI: Optimized GISceneManager.Update in order to take less processor time on scene start (769044).
  • Graphics: D3D11 native plugin API now supports obtaining native texture type underpinning a RenderBuffer. (752855)
  • IL2CPP: Removed warnings from generated C++ code when compiling with clang.
  • Smart TV: Added support for 2016 TVs’ fonts and remote controllers.
  • Substance: A FreezeAndReleaseSourceData() method was added to the ProceduralMaterial class. This renders the ProceduralMaterial immutable and releases some of the underlying data to decrease the memory footprint. To release even more of the underlying data, it is necessary to call Resources.UnloadUnusedAssets() afterwards. Once frozen, the ProceduralMaterial cannot be cloned, its ProceduralTextures cannot be rebuilt, nor can its inputs be set (see the sketch after this list).
  • VR: Mask invisible pixels so GPU time is not wasted near screen edges (Oculus SDK 1.0+).
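
A hedged sketch of the usage pattern the Substance note above describes. The SubstanceFreezer component and the renderer lookup are our own illustration; FreezeAndReleaseSourceData() and Resources.UnloadUnusedAssets() are the calls named in the note:

    using UnityEngine;

    public class SubstanceFreezer : MonoBehaviour
    {
        void Start()
        {
            // Grab the ProceduralMaterial assigned to this object's renderer, if any.
            var material = GetComponent<Renderer>().sharedMaterial as ProceduralMaterial;
            if (material == null)
                return;

            // Make the material immutable and drop some source data to cut memory.
            // After this call it cannot be cloned, rebuilt, or have inputs set.
            material.FreezeAndReleaseSourceData();

            // Release even more of the underlying data.
            Resources.UnloadUnusedAssets();
        }
    }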

CHANGES

  • Audio/Scripting: An optimization was reverted while fixing (739224) and (761360) which would cause memory to once again be allocated during audio callbacks.
  • BlackBerry: Removed BlackBerry option from build player settings window.

FIXES

  • 2D: Fixed memory leak when applying changes to sprite. (754282)
  • 2D: Occlusion Culling works correctly with X/Y flipped SpriteRenderers. (760062)
  • Android: Fixed internal profiler on Gear VR. (741003)
  • Android: Fixed missing styles.xml files. (768027)
  • Android: Fixed remote frame debugger. (742199)
  • Android: Marshmallow – Added the possibility to disable the permission dialog by adding metadata to the activity.
  • Android: Mono – Fixed crash on startup with Unity Ads when stripping is enabled. (755510)
  • Android: Timeout no longer happens when an application is sent to the background. (738843)
  • Animation: Fixed a crash when changing OverrideController on Animator with no Avatar. (741043)
  • Animation: Fixed a crash where assigning an override controller with no controller to override. (764778)
  • Animation: Fixed changing Animator.runtimeAnimatorController while in play mode crashing the editor. (731237)
  • Animation: Fixed Euler angles on rotation causing Transform to be set to NaN in some cases. (759408, 760759)
  • AssetBundle: Change to use natural sorting when listing the AssetBundle names. (736556)
  • AssetBundle: Fixed loading error for asset bundles built with DisableWriteTypeTree flag. (756567)
  • AssetBundle: Fixed the hash collision when building AssetBundles. (716166)
  • AssetBundle: Fixed the issue that LoadAsset(name) returns null if a bundle contains a prefab and another asset with the same name. (743704)
  • AssetBundle: Loading multiple invalid asset bundles fails correctly now. (756198)
  • AssetBundles: Fixed AssetBundle.CreateFromFile retaining a file descriptor; the previous fix was incomplete. (715753)
  • Audio: Fixed mixer reverb effects getting cut off early in standalone builds. (760985)
  • Audio: Avoid random crashes when using audio callbacks in scripts. (739224, 761360)
  • Core: Fixed crash when game object that is a child of a missing prefab is deleted. (757799)
  • Core: Improved the error message when build data files are corrupted or from a mismatched version.
  • Core: Make sure persistent transforms are not added to the active scene when running in a player. (758058)
  • Editor: Fixed an issue with new scripts created in the Editor folder when the Unity installation path contains the word “test”. (761664)
  • Editor: Fixed a bug where Undo recording would insert property modification on the prefab asset if it was being edited in the inspector. (711720)
  • Editor: Fixed crash when entering playmode if LoadScene was called during Awake or Start. (756409, 760459)
  • Editor: Fixed crash when replacing prefabs with Alt button pressed. (753176)
  • Editor: Fixed editor freeze when picking in scene with many overlapping game objects. (730441)
  • Editor: Fixed freeze/crash on project startup when async upload buffer is set too small. (754704)
  • Editor: Fixed game object duplicates on play when reference to that game object is set in another scene. (750117)
  • Editor: It is now possible to replace a prefab asset with a different prefab asset. (761995)
  • Frame Debugger: Even when it was not used, it was creating some overhead in development standalone builds. Reduced that.
  • GI: Fix for lightmaps in linear lighting mode looking different between the player and editor. (724426)
  • GI: Fix for LoadLevel in the player causing lightmaps to become brighter when in Linear mode. (728595, 738173)
  • GI: Fixed brightly coloured (green/red/white/blue) pixels appearing in the directional lightmap caused by interpolation from invalid indirect lightmap data. Note: Need to clear the GI cache and rebake to get the fixed lightmaps. (765205, 734187)
  • GI: Fixed missing Ambient when “Baked GI” is disabled and “Ambient GI” is set to Baked. (756506)
  • Graphics: Fixed a crash in the editor when switching graphics API from a non-DX9 API e.g. DX11. (740782)
  • Graphics: Fixed an issue where Standard shader using directional lightmaps could output NaN to the framebuffer. This would usually blow up when using HDR rendering with Bloom.
  • Graphics: Fixed building shaders correctly for WebGL in AssetBundles. (746302)
  • Graphics: Fixed crash when calling Graphics.DrawMesh with null material. (756849)
  • Graphics: Fixed crash when using GL.Begin quad rendering with non-multiple-of-4 vertex count. (761584)
  • Graphics: Fixed video memory leak in the splash screen animation.
  • Graphics: Fixed occasional Movie Texture crash with multiple movies present. (753593, 764084)
  • Graphics: Fixed profiling related information (SetGpuProgramName) performance issue in development player builds.
  • Graphics: Fixed rendering of deferred reflections when last object rendered before them had negative scale. (757330)
  • Graphics: Fixed skinned mesh memory leak. (760665)
  • Graphics: In Editor OpenGL ES 2.0 emulation increase max cubemap size to 1024 (from 512). (650870)
  • Graphics: Reduced framerate spikes where culling system could sometimes stall for several ms while waiting for jobs.
  • Graphics: Textures imported as cubemaps now are properly marked as non-readable if import option says so. Saves memory! (724664)
  • IL2CPP: Avoid crash on IL2CPP when searching for attributes. (766208)
  • IL2CPP: Avoid double allocation of memory for multi-dimensional arrays. (766168)
  • IL2CPP: Fixed performance regression in LivenessState calculation. The performance is back to where it was prior to 5.3.2p2.
  • IL2CPP: Fixed an occasional crash when capturing managed heap when parts of it are not committed.
  • IL2CPP: Fixed Array.Copy when destination array type is wider than source array type (e.g. int[] -> long[]). (741166)
  • IL2CPP: Fixed Stfld/Ldfld opcode usage generated by MS C# compiler. (761530)
  • IL2CPP: Fixed Unity IAP on Android with IL2CPP. (761763)
  • IL2CPP: Generate proper C++ code for marshaling wrappers of methods that have System.Guid as a parameter type. (766642)
  • IL2CPP: Implemented support for Assembly.GetReferencedAssemblies and Module.GetTypes() (724547)
  • IL2CPP: Properly marshal arrays of four-byte bool values. (767334)
  • IL2CPP: Raised NullReferenceException when Ldvirtftn instruction had a null target. (766208)
  • iOS: Added missing icon for iPad Pro. (755415)
  • iOS: Fix for WWW deadlock. (759480)
  • iOS: Fixed a crash triggered by deactivating an input while app is going into background. (760747)
  • iOS: Fixed an issue where attached controllers were not found. (761326)
  • iOS: Fixed application freeze on iPhone 4 when rotating the device. (761684)
  • iOS: Fixed code completion for iOS Editor Extensions. (759212)
  • iOS: Handheld.PlayFullScreenMovie only allows playing one movie at a time.
  • iOS: Notify Transport that we finished receiving data so we can mark the buffer as complete when we get an error. (761361)
  • iOS: While entering background/foreground, improve player pause/resume handling to check if external parties (like video player) currently manage the paused state. (534752)
  • iOS/IL2CPP: Prevent a managed exception on 64-bit builds during some array creation operations which has the message “ArgumentException: Destination array was not long enough. Check destIndex and length, and the array’s lower bounds”. (765910)
  • iOS/OSX: Fixed SIMD math, which fixes skinning on iOS and source code compilation on OSX. (754816)
  • iOS/Video: AVKit based player didn’t show “done” button on iOS 8+. (736756)
  • iOS/Video: Fixed MPMoviePlayer error handling for invalid files.
  • iOS/Video: Improved MPMoviePlayer/AVKit orientation and view controller handling. (746018, 729470)
  • iOS/Video: Scaling mode behaviour fixes for iPad Pro. (745346)
  • iOS/Xcode: Added .tbd extension support.
  • Linux: Fixed flickering/corrupted rendering with OpenGL Core. (770160)
  • Linux: Fixed non-native-resolution fullscreen rendering with OpenGL Core. (763944)
  • Mono: Corrected a crash in mono_string_to_utf8_checked when Marshal.StructureToPtr is called from managed code. (759459)
  • Mono: Resolved intermittent crash caused by a race condition that occurs when using managed threads.
  • MSE: Fixed the issue that calling SceneManager.LoadScene** while exiting playmode caused the scene to be unremovable from the hierarchy. (756218)
  • MSE: Fixed the issue that SceneManager.sceneCountInBuildSettings gives 0 until entering play mode. (754925)
  • Networking: Fixed issue where NetworkManager doesn’t become “ready” if online scene is set and offline scene is not. (734218)
  • Networking: Fixed issue where OnStartAuthority is called twice on hosts. (748967)
  • Networking: SyncLists now only send updates when values change. (738047)
  • OpenGL Core: Various bug fixes to the shader compiler; often resulting in better performance for complex image effect shaders.
  • OpenGL Core: Fixed random crashes on compute shader dispatch. (761412)
  • Particles: Ensure consistent direction between 3D and 1D rotation. (760830)
  • Particles: Fix for terrains ignoring collision layers. (763041)
  • Particles: Fixed a collision crash. (757969)
  • Particles: Fixed IsFinite error spam with particles and second camera. (756786)
  • Particles: Fixed issue where particle system doesn’t play if method is called via Invoke. (757461)
  • Particles: Fixed issue where particle system is stopped and cleared and after that it won’t play when simulation space is set to local. (756971)
  • Particles: Fixed issue where particles are not drawn in the correct order on rotated particle systems. (696610)
  • Particles: Fixed issue where ParticleSystem.IsAlive() always returns True for particle systems with longer duration. (755677)
  • Particles: Fixed issue whereby particle systems were not looping correctly. (756742)
  • Particles: Fixed particle culling issues. (764701)
  • Particles: Fixed support for negative inherit velocity values. (758197)
  • Particles: Fixed the issue of particles disappearing after going offscreen and returning. (759502)
  • Particles: Fixed wrong culling of some particle objects caused by incorrect bounds calculation due to parent scaling. (723993)
  • Particles: Fixed: particle system not playing when triggered via Event Trigger. (756725)
  • Particles: Fixed: particle system only playing once. (756194)
  • Particles: Particles are now emitted with the correct position/rotation when using a Skinned Mesh Renderer or Mesh Renderer as shape. (745121)
  • Physics: Fixed center of mass and inertia tensor being reset after game object was reactivated. (765300)
  • Physics: Rigidbodies without non-trigger colliders can have custom center of mass and inertia tensor again. (763806)
  • Renderdoc: When making a RenderDoc capture from editor UI, make sure to include the whole frame including user script updates.
  • Scripting: Fixed issue that caused UnityScript to incorrectly detect some methods’ return types. (754405)
  • Scripting: Prevent the Particle System from being stripped if the Particle System Renderer is used and engine code stripping is enabled. (761784)
  • Shaders: Fixed a shader compiler crash if a compute shader declares a samplerstate that didn’t match the naming scheme.
  • Shadows: Changed light shadows near plane minimum bound to either 1% of range or 0.1, whichever is lower.
  • Shadows: Fixed “half of scene all in shadows” artifacts in some scene/camera setups. (743239)
  • Smart TV: Fixed a problem with showing the custom splash screen.
  • Sprites: Occlusion Culling works correctly with X/Y flipped SpriteRenderers. (760062)
  • Substance: Fixed corner cases of outputs not being impacted by any input not being generated. (754556, 534658, 762897)
  • Tizen: Fixed cursor initially starting in the wrong position on screen. (740180)
  • Tizen: Fixed OpenGL crashing issues on the Z300F.
  • Tizen: Input field will no longer show  when return is pressed on an empty entry. (740172)
  • Tizen: System permissions that are not required are no longer requested.
  • tvOS: Fixed build error with Xcode trampoline. (767645)
  • tvOS: Fixed Game Center score reporting due to incorrect API check. (755395)
  • UI: Fixed “Trying to add (Layout Rebuilder for) Content (UnityEngine.RectTransform) for layout rebuild while we are already inside a layout rebuild loop.” error. (739376, 740617)
  • UI: Fixed flickering/texture swapping issues. (753423, 758106)
  • UI: Fixed issue with incorrect accent calculation for non-dynamic fonts. (747512)
  • Upgrades: Delete the installed Playback Engines and Documentation before upgrading Unity. (756818)
  • VR: Dynamically switch to headset’s audio output / input driver (Oculus SDK 1.0+).
  • VR: Fixed audio redirection in standalone builds (Oculus SDK 1.0+).
  • VR: Fixed crash when trying to enter play mode when the Plugin was not loaded or the Oculus runtime was not installed. (759841)
  • VR: Fixed Skybox clipping issues. (755122, 717989, 734122)
  • VR: Fixed VR Focus and VR ShouldQuit not respecting notifications when the Device was disconnected.
  • VR: Fixed VR Splash screen color precision.
  • WebGL: Corrected the following compiler error which might occur in generated C++ code: “error: non-constant-expression cannot be narrowed from type ‘uintptr_t’ (aka ‘unsigned int’) to ‘il2cpp_array_size_t’ (aka ‘int’) in initializer list [-Wc++11-narrowing]”. (767744)
  • Windows Store : Building from Unity will no longer overwrite project.json file if it was modified in solution. (765876)
  • Windows Store: When building from Unity, files in the Visual Studio solution will not be overwritten if they are identical. (759735)
  • Windows Store: Fixed a “MdilXapCompile failed” error when trying to build Visual Studio project for Windows Phone 8.1. This used to happen when the Unity game had over 8000 classes across all assemblies. (762582)
  • Windows Store: Fixed a crash when loading C# type from plugin which was not included in the final build. (765893)
  • Windows Store: Fixed a crash which happened on “Windows N” versions when using IL2CPP scripting backend. (760989)
  • Windows Store: Fixed a rare crash in ARM linker (fatal error LNK1322: cannot avoid potential ARM hazard (QSD8960 P1 processor bug) in section #) when using IL2CPP scripting backend. (766755)
  • Windows Store: Fixed an issue which caused small tiles to be copied incorrectly to the Visual Studio solution for Windows Phone 8.1 SDK. (762926)
  • Windows Store: Fixed anti-aliasing when calling Screen.SetResolution on Universal Windows 10 Apps.
  • Windows Store: Fixed Application.Quit() when using D3D project type or IL2CPP scripting backend. (764378)
  • Windows Store: Fixed error “Task ‘ExpandPriContent’ failed.” which occurred when trying to build an application package with IL2CPP scripting backend when using default Unity icons. (764632)
  • Windows Store: Fixed the Visual Studio graphics debugger crashing when trying to debug Windows Phone 8.1 projects.
  • Windows Store: Mouse and touch input will work correctly after locking/unlocking the screen. (768929)
  • Windows Store: Screen.SetResolution(x, y, true) will no longer ignore width and height, so you can set your desired resolution on Universal Windows 10 Apps.
  • Windows Store: Screen.resolutions will return a valid value. (748845)
  • WinRT/IL2CPP: Allow native DLLs to be loaded both with and without the .dll extension. (760041)
  • XboxOne: Fixed a bug with YUY2 processing on the XboxOne.
  • XboxOne/IL2CPP: Fixed a problem compiling generated C++ files when there is a space in the path to the project directory. (768193)

Head-mounted Displays (HMD)

What are Head-mounted Displays?

Head-mounted displays or HMDs are probably the most instantly recognizable objects associated with virtual reality. They are sometimes referred to as virtual reality headsets or VR glasses. As you might have guessed from the name, these are display devices that are attached to your head and present visuals directly to your eyes. At a minimum, if a device meets those two criteria you may consider it an HMD in the broadest sense.

HMDs are not the sole purview of virtual reality; they have been used in military, medical and engineering contexts, to name but a few. Some HMDs allow the user to see through them, so that digital information can be projected onto the real world, something commonly referred to as augmented reality.

When we look at the diversity of HMDs that exist today within the context of virtual reality, it becomes apparent that there’s much more to these devices than strapping two screens to your eyes. In order to allow for an immersive experience either as a personal media device or as a full-on virtual reality interface, there are a number of technologies that can be incorporated in an HMD. Let’s have a look at the most important ones you should be aware of.

Display Technology

Clearly the display is one of the most important components in an HMD; after all, it’s the part of the device you’ll be most conscious of during use. Today HMDs use various technologies to get pictures to eyeballs, but the most common is the liquid crystal display, better known as the LCD panel, the same type of panel used in smartphones, televisions and computer monitors. A similar-looking technology known as OLED (Organic Light-Emitting Diode) is also finding its way into these devices, and there are HMDs with OLED displays out there already.

Pixels and displays

Thanks to smartphones and tablet computers there has been something of an arms race to produce small displays only a few inches across with very high pixel densities. Pixels (short for “picture elements”) are the little dots that make up a picture; the more of them you have in every square inch of display, the crisper the image. According to Steve Jobs, the late CEO and co-founder of Apple, once you have more than 300 pixels per inch (ppi) the human eye can no longer discern individual pixels at a viewing distance of 10 to 12 inches. High-end phone displays are now heading for double that pixel density, which means that for normal smartphone use the extra density is wasted. However, in an HMD where your eyes are only a few inches from the display, that extra pixel density can mean the difference between crisp images and a fuzzy mess.
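
To make the arithmetic concrete: pixel density is just the diagonal resolution divided by the diagonal size. A quick Python sketch (the 5.5-inch QHD panel is a hypothetical example):

    import math

    def pixels_per_inch(width_px, height_px, diagonal_inches):
        # Pixel density = diagonal resolution in pixels / diagonal size in inches.
        return math.hypot(width_px, height_px) / diagonal_inches

    # Hypothetical 5.5-inch smartphone panel at 2560x1440 ("QHD"):
    print(round(pixels_per_inch(2560, 1440, 5.5)))  # ~534 ppi, nearly double the 300 ppi threshold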

Retinal projection

Another display technology that hasn’t yet seen widespread use, but does exist in some headsets such as the Avegant Glyph, is retinal projection. These headsets use tiny digital projectors with microscopic mirrors to beam the image directly onto your retina, effectively using the back of your own eyeball as the screen. Proponents of retinal projection claim many advantages in terms of quality and eye strain compared to LCD and OLED HMDs, but in its current state the technology cannot yet provide the immersive field of view that other HMD technologies can.

Two final aspects of HMD displays that are quite important are refresh rate and latency.

Sony HMZ-T3W Head Mounted 3D Viewer

Refresh rate

Refresh rate refers to how quickly a display can change its contents within a span of time. Typical LCD computer monitors can do this 60 times per second, or at 60Hz, which corresponds to a maximum frame rate of 60 frames per second, one frame being one complete and discrete picture on the screen. Cinematic film typically runs at a frame rate of 24fps, though some newer films like The Hobbit have transitioned to 48fps; to audiences this makes the film appear very smooth and “hyper real”, something that has had a mixed reception. For web video, such as that found on YouTube, 60fps is starting to gain support, especially for action footage taken with cameras such as the GoPro. To put it simply, the more frames you display in a second, the smoother and crisper motion appears. Since virtual reality is meant to enable a feeling of presence and immersion, it’s fair to ask what refresh rate is needed to achieve that. It turns out that 60fps is a working minimum, but 90fps appears to be the sweet spot, and some HMDs even support 120Hz refresh rates.
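
The flip side of refresh rate is the time available to draw each frame, which is simple division. A minimal Python sketch:

    # Each frame must be produced within the refresh interval, or motion stutters.
    for hz in (24, 60, 90, 120):
        print(f"{hz} Hz -> {1000 / hz:.1f} ms per frame")
    # 24 -> 41.7 ms, 60 -> 16.7 ms, 90 -> 11.1 ms, 120 -> 8.3 ms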

Latency

Latency is the time gap between an input and an output. For example, if you turn your head in a virtual reality world but the picture takes a second or two to catch up to your new head position, you are experiencing severe latency. In order to fool your brain’s visual system, virtual reality requires very low latencies, usually 20ms or less for an absolutely top-notch experience. Unfortunately latency is not a simple issue to resolve, and it isn’t solely the result of your display choice. The total latency between input and output is the sum of the entire chain between those two points: from the positional sensors to the computer hardware rendering the image to the display itself, each component adds a small delay to the total time. Therefore a low-latency display is a must, but it is not always enough by itself.
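
As a rough illustration of how such a budget adds up across the chain, here is a Python sketch with invented per-stage numbers (real values vary widely between systems):

    # Illustrative per-stage delays in milliseconds; the exact figures are made up.
    pipeline_ms = {
        "sensor sampling": 2.0,
        "pose update / sensor fusion": 1.0,
        "rendering (one 90 Hz frame)": 11.1,
        "display scan-out": 5.0,
    }
    total = sum(pipeline_ms.values())  # 19.1 ms
    print(f"motion-to-photon latency: {total:.1f} ms "
          f"({'within' if total <= 20 else 'over'} the 20 ms target)")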

Optics

If you were to take a phone LCD display and hold it to your face, chances are it wouldn’t do much for you. In order to create the immersive feeling of being in a virtual world it is necessary to take the flat image on the screen and magnify it to fill our visual field. Careful experimentation by a team at the University of Southern California indicated that any HMD aiming for the edge-less, immersive visuals needed for convincing virtual reality would need a field of view (FOV) of between 90 and 100 degrees. The lenses in an HMD play a key role in taking the flat image on the screen and turning it into something that fills a substantial area of our visual field. Our field of vision isn’t rectangular like a screen, nor is it flat, so optical trickery is a necessity to make the illusion work. There are many different optical designs for HMDs, and different approaches to which lenses should be used and why, but one universal is that lens quality matters: an HMD that uses cheap lenses may suffer poor picture quality, poor clarity and unwanted distortion. Often the most drastic after-market upgrade that can be done on an HMD is the installation of superior lenses.
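
As a simplified illustration of the geometry, treat the lens as presenting a flat virtual image at a fixed distance from the eye; the field of view then follows from basic trigonometry. The widths and distances below are invented for the example:

    import math

    def horizontal_fov_deg(virtual_image_width_mm, viewing_distance_mm):
        # Pinhole approximation: FOV = 2 * atan(half-width / distance).
        return math.degrees(2 * math.atan((virtual_image_width_mm / 2) / viewing_distance_mm))

    # A virtual image ~120 mm wide presented ~60 mm from the eye:
    print(round(horizontal_fov_deg(120, 60)))  # 90 degrees, in the 90-100 degree target range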

Head Tracking

It’s all well and good that you can see the picture clearly, but without knowing the position of your head the computer doesn’t know where you are looking. Modern HMDs use various technologies to accurately track head position. Thanks to advances in smartphone technology we can now put a multi-axis accelerometer on a chip, and infrared tracking cameras can accurately watch markers on the HMD, relaying positional data to the computer. Mobile HMDs that are not for use in a fixed location can’t make use of external camera tracking, for obvious reasons, but some newer systems such as the Microsoft HoloLens and Google’s Project Tango can use multiple sensors in addition to accelerometers for positional calculation.

It’s important to note that some HMDs, especially those that use your smartphone, can only track which direction you are looking in. Dedicated HMDs often track position along another axis as well, also letting you “lean” in for a closer look. This is an important element of immersion, since that is one of the ways we look at real objects in the real world.
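
To give a flavour of how raw sensor readings become a head orientation, here is a minimal Python sketch of a complementary filter, one common way to fuse a gyroscope with an accelerometer. The function names and the 0.98 blend factor are illustrative, not taken from any particular headset:

    def complementary_filter(prev_pitch_deg, gyro_rate_dps, accel_pitch_deg, dt, alpha=0.98):
        # Blend a fast-but-drifting gyroscope with a slow-but-stable accelerometer.
        gyro_pitch = prev_pitch_deg + gyro_rate_dps * dt  # integrate the angular rate
        return alpha * gyro_pitch + (1 - alpha) * accel_pitch_deg  # pull back toward gravity

    # Called once per sensor sample, e.g. at 1000 Hz:
    # pitch = complementary_filter(pitch, gyro_rate, accel_pitch, dt=0.001)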

Eye Tracking

At the time of writing only one HMD, the FOVE, promises to integrate eye tracking technology, although third parties are offering upgrade packages for other HMD products.

Eye tracking allows the HMD to calculate where your eyes are looking and then do something with that information. For example, it could change the depth of field of the visuals on screen to simulate natural vision more closely, virtual characters could react to your gaze, or you could use your eyes to quickly select menu items in the virtual world.
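
As a rough sketch of that last use case, here is how gaze “dwell” selection of menu items might work in Python (the 0.8-second threshold is an arbitrary choice):

    DWELL_SECONDS = 0.8  # how long the gaze must rest on an item before it is selected

    class GazeMenu:
        def __init__(self):
            self.current_item = None
            self.dwell = 0.0

        def update(self, hovered_item, dt):
            # Call once per frame with the menu item under the gaze point (or None).
            if hovered_item is not self.current_item:
                self.current_item, self.dwell = hovered_item, 0.0  # gaze moved: reset timer
            elif hovered_item is not None:
                self.dwell += dt
                if self.dwell >= DWELL_SECONDS:
                    self.dwell = 0.0
                    return hovered_item  # dwell complete: treat as a selection
            return None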

Eye tracking could be a very important input for general purposes, allowing us to interact with user interfaces in more natural ways.

It is still early days for eye tracking technology in virtual reality; only time will tell what use cases developers will come up with.

Audio Hardware

There isn’t much to say about audio in HMDs: some include headphones and others do not. More often than not you will have the option of using your own headphones, with any provided pair being removable. There is a range of audio options available, including positional, multi-speaker headsets.

Computer Hardware

An HMD is both an input and an output device, tracking your head movements and relaying graphics to your eyes. In between those two processes lies computing hardware, and there are really only three categories of HMD here. The first is completely self-contained and possesses all the computer hardware necessary for VR within the HMD itself or otherwise attached to the body; these are mobile, battery-powered systems, and usually this hardware is repurposed from smartphones or might literally use a smartphone to perform the needed tasks. The second type of HMD has no onboard computing power but interfaces with an external computer, usually accepting a High-Definition Multimedia Interface (HDMI) input and using a Universal Serial Bus (USB) connector to send head tracking data. The third class of device acts as both, having its own onboard hardware but also allowing input from external devices.

Although smartphone hardware has become powerful enough to provide reasonable virtual reality experiences, it still lags far behind what is possible with powerful computer hardware or the major mainstream video game consoles. In terms of pure visual fidelity and frame rate, therefore, dedicated external computers are still the best choice. Using such a computer for virtual reality needn’t leave us tethered to our desks in the future, though: wireless display links exist, but getting them to work for virtual reality within the tight latency requirements is easier said than done.

Other Hardware

Now we are left with more mundane things such as the housing and other creature comforts. HMDs are made from all sorts of materials: cardboard, plastic, metal and anything else that will hold the parts together. It’s important to consider what adjustments are available on a particular HMD; the adjustment range of the head strap matters in this regard. If you wear glasses, make sure the HMD will accommodate them or allow for lens adjustments that make them unnecessary. Finally, the comfort padding and ergonomics of the HMD are often overlooked but very important. After all, the HMD spends a lot of time strapped to the user’s face.

Companion Input Devices

As mentioned above, the HMD can capture information about your head position, but unless you are happy to stand in one spot without moving or interacting with anything, more forms of input are needed. We deal with these input devices in detail in the appropriate section of the site, but for the sake of completeness in this overview it is worth mentioning a few. At present the most mainstream way of navigating virtual worlds is with existing videogame peripherals. These include gamepads, flight sticks, racing wheels and of course the keyboard and mouse. Several more immersive devices meant specifically for VR are available or in development, such as omnidirectional treadmills and specialised devices such as the SteamVR controllers.

At the very high end you might find full-body suspension and motion tracking systems, active mechanical force feedback or elaborate hydraulic vehicle simulation rigs. These all work in collaboration with the HMD to allow for interactivity and even greater immersion.

So How Does It All Work?

Setting aside less common technologies such as retinal projection, most HMDs that use LCD or OLED displays work by presenting each eye with a similar but slightly offset image. This provides the illusion of stereoscopy, which is what most people think of as 3D imagery. As you might guess, this suggests a separate display for each eye, but in order to save on cost and complexity most HMDs use a single display panel that shows both images, with a plastic divider to prevent each eye from seeing the other eye’s image.
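
In code terms, producing the offset pair amounts to rendering the scene twice from two positions separated by the interpupillary distance (IPD). A minimal Python sketch, assuming the commonly cited ~64 mm average IPD:

    IPD_M = 0.064  # average interpupillary distance, roughly 64 mm

    def eye_positions(head_pos, right_vec):
        # Offset the head position by half the IPD along the head's "right" axis.
        half = IPD_M / 2
        left = [h - half * r for h, r in zip(head_pos, right_vec)]
        right = [h + half * r for h, r in zip(head_pos, right_vec)]
        return left, right

    # Head at eye height, "right" along +X:
    left_eye, right_eye = eye_positions([0.0, 1.7, 0.0], [1.0, 0.0, 0.0])
    # Render the scene once from each position to produce the offset image pair.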

The actual images do not fill the display from edge to edge and are not perfectly square. If you were to look at the screen directly you’d see two images with fuzzy grey edges. This mimics our visual field, which is sharp at the centre, with curvature and a gradual loss of acuity towards the edges. Viewed through the lenses at the right distance, the picture neatly fits into our visual field and appears natural, as if we were looking at the real scene rather than a picture of it.
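
Those curved, non-rectangular images are typically produced by pre-distorting the rendered frame so that the lens’s own distortion cancels it out. A sketch of the usual radial model, with invented coefficients (real values are lens-specific):

    def predistort(x, y, k1=0.22, k2=0.24):
        # Radial "barrel" pre-distortion of a point in lens-centred, normalized
        # coordinates; the lens's opposite (pincushion) distortion undoes it.
        r2 = x * x + y * y
        scale = 1 + k1 * r2 + k2 * r2 * r2
        return x * scale, y * scale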

So, when it all comes together you will feel like you are present in a virtual world. Wherever you look, you will see a virtual reality, replacing the real world around you. This is how the HMD achieves the illusion of virtual reality.

Conclusion

This was a broad overview of HMDs; be sure to check our in-depth articles on individual HMD products that are on the market or in development. Armed with the knowledge above you will have no trouble understanding the range and variety of devices on offer.

Virtual Reality in the Military

Virtual reality has been adopted by the military – all three services (army, navy and air force) – where it is used for training purposes. This is particularly useful for training soldiers for combat situations or other dangerous settings where they have to learn how to react in an appropriate manner.

A virtual reality simulation enables them to do so without the risk of death or serious injury. They can re-enact a particular scenario, for example an engagement with an enemy, in an environment in which they can experience it without the real-world risks. This has proven to be safer and less costly than traditional training methods.

Military uses of virtual reality

These include:

  • Flight simulation
  • Battlefield simulation
  • Medic training (battlefield)
  • Vehicle simulation
  • Virtual boot camp

Virtual reality is also used to treat post-traumatic stress disorder. Soldiers suffering from battlefield trauma and other psychological conditions can learn how to deal with their symptoms in a ‘safe’ environment. The idea is for them to be exposed to the triggers for their condition and gradually adjust to them, which has the effect of decreasing their symptoms and enabling them to cope with new or unexpected situations.

This is discussed further in the virtual reality treatment for PTSD (post traumatic stress disorder) article.

Virtual reality parachuting simulation

VR equipment and the military

Virtual reality training is conducted using head mounted displays (HMD) with an inbuilt tracking system and data gloves to enable interaction within the virtual environment.

Another use is combat visualisation, in which soldiers and other related personnel are given virtual reality glasses to wear which create an illusion of 3D depth. The results of this can be shared amongst large numbers of personnel.

Find out more about individual uses of virtual reality by the different services, e.g. virtual reality navy training, in the separate virtual reality and the military section.

This section discusses the various military applications of virtual reality and the ramifications from using this form of technology. The military may not be an obvious candidate for virtual reality but it has been adopted by all branches – army, navy and air force.

What the military stresses is that virtual reality is designed to be used as an additional aid; it will not replace real-life training.

This section discusses all aspects of how virtual reality is used by the military, from training through to combat situations. It is arranged as follows:

  • Virtual reality war
  • Virtual reality and the Army
  • Virtual reality and the Navy
  • Virtual reality and the Air force
  • Virtual reality army training
  • Virtual reality army exercises
  • Virtual reality air force training
  • Virtual reality navy training
  • Virtual reality combat training
  • Virtual reality combat simulation
  • Virtual reality military weapons
  • Virtual reality military history

Each of these subjects is discussed as a separate article.

What is apparent is that virtual environments are ideal setups for military training, in that they enable the participants, i.e. soldiers, to experience a particular situation within a controlled area, for example a battlefield scenario in which they can interact with events but without any personal danger to themselves.

The main advantages of this are time and cost: military training is prohibitively expensive, especially airborne training, so it is more cost-effective to use flight simulators than actual aircraft. Plus it is possible to introduce an element of danger into these scenarios without causing actual physical harm to the trainees.

Flight simulators are a popular theme in military VR training, but there are others, including medical training (battlefield), combat training, vehicle training and ‘boot camp’.

Another use, and one which is not immediately thought of, is virtual reality for post-traumatic stress disorder (PTSD). PTSD, or ‘combat stress’, has only recently been acknowledged as a medical condition, but it causes very real damage to the person concerned and their family. Virtual reality is used to help sufferers adjust to their symptoms and develop coping strategies for when they are placed in new situations.

This is discussed at greater length in our virtual reality treatment for PTSD article.

Generally, virtual reality training involves the use of head mounted displays (HMD) and data gloves to enable military personnel to interact with objects within a virtual environment. Alternatively, they may be given virtual reality glasses to wear which display a 3D image.

Why Virtual Reality Will Finally Take Off In 2016?

Virtual reality has long been on the market, yet its adoption is still considered slow compared to other wearables. Late last year, Gartner reported that mature markets will use and own three to four devices per person by 2018, but it’s unlikely that bulky VR headsets will be one of them.

“Main devices will include smartphones, tablets, convertibles (two-in-one devices) and notebooks, and will contribute to more than two devices per person at any time,” according to Gartner’s report. “Niche devices will include a growing range of wearables such as smart watches, health bands, smart glasses and new types of connected devices such as smart cards, e-readers and portable cameras.”

However, despite consumers’ hesitancy toward virtual reality technology, it is expected to see significant growth this year. Here’s why we think so:

Mobile Devices to Drive Adoption

Smartphones and tablets are now becoming more powerful than ever before, offering gamers seamless experiences while playing high definition and graphic intensive games. From premium handsets to even the budget-friendly ones, all are now offering a great gaming experience to users.

In particular, industry leaders like Apple are said to be focusing on improving gaming capabilities, especially on the recent iPhone 6s, which can run HD games from iTunes. O2 mentioned that the handset now runs Apple’s most advanced chipset, the A9, with 64-bit architecture and an integrated M9 motion coprocessor.

The company is also reported to have hired hundreds of people to work secretly on its VR team, which has been building prototype headsets for several months now, according to the technology website The Verge.

Of course, other mobile manufacturers have already showcased their best VR headsets to the world, including major players in the smartphone arena such as Sony, Samsung, and even Microsoft.

Competition Will Drive Prices Lower

Although this isn’t going to happen anytime soon, experts have been predicting that the competitive VR marketplace will result in lower prices. Facebook joined the craze when Mark Zuckerberg announced its acquisition of Oculus in March 2014.

“Imagine enjoying a courtside seat at a game, studying in a classroom of students and teachers all over the world or consulting with a doctor face-to-face – just by putting on goggles in your home,” he wrote when he announced Facebook’s acquisition of the company. “Virtual reality was once the dream of science fiction. But the internet was also once a dream, and so were computers and smartphones.”

Some tech companies outside the US are also working on their own VR headsets, which are set to be revealed this year. The increasing competition will force some companies to lower their prices to get more people to purchase their devices.


Investors Are Eager to See Results

For the longest time, the public was only exposed to a slew of prototype VR headsets, the majority of them predicted to be unaffordable for the common consumer. Thus, many investors were uncertain whether this technology would ever prosper.

But it seems investors are now more eager to see initial proof of concept: Super Data Research predicts a combined investment worth $6.1 billion and a consumer base of 55.8 million to aim at this year, figures said to be driving current market momentum and growing industry expectations.

Perhaps the only problem left is how well tech companies are able to sell these technologies, as “virtual reality is notoriously hard to actually sell.” Given that it’s still considered an optional gaming peripheral, many gamers don’t see the real importance of investing in VR headsets. However, once more VR games hit the market, a spike of interest among gamers is expected to draw more attention to the headsets.

IBM Watson Chief Technology Officer Rob High to Speak at GPU Technology Conference

Highlighting the key role GPUs will play in creating systems that understand data in human-like ways, Rob High, IBM Fellow, VP and chief technology officer for Watson, will deliver a keynote at our GPU Technology Conference, in Silicon Valley, on April 6.

Five years ago, Watson grabbed $1 million on Jeopardy!, competing against a pair of the TV quiz show’s top past winners. Today, IBM’s Watson cognitive computing platform helps doctors, lawyers, marketers and others glean key insights by analyzing large volumes of data.

High will join a lineup of speakers at this year’s GTC that includes NVIDIA CEO Jen-Hsun Huang and Toyota Research Institute CEO Gill Pratt, who will all highlight how machines are learning to solve new kinds of problems.

Fueling an AI Boom

Watson is among the first of a new generation of cognitive systems with far-reaching applications. It uses artificial intelligence technologies like image classification, video analytics, speech recognition and natural language processing to solve once intractable problems in healthcare, finance, education and law.

GPUs are at the center of this artificial intelligence revolution (see “Accelerating AI with GPUs: A New Computing Model”). And they’re part of Watson, too.

IBM announced late last year that its Watson cognitive computing platform has added NVIDIA Tesla K80 GPU accelerators. As part of the platform, GPUs enhance Watson’s natural language processing capabilities and other key applications. (Both IBM and NVIDIA are members of the OpenPOWER Foundation. The open-licensed POWER architecture is the CPU that powers Watson.)

GPUs are designed to race through a large number of tasks at once, something called parallel computing. That makes them ideal for many of the esoteric mathematical tasks that underpin cognitive computing, such as sparse and dense matrix math, graph analytics and Fourier transforms.

NVIDIA GPUs have proven their ability to accelerate applications on everything from PCs to supercomputers using all these techniques. Bringing the parallel computing capabilities of GPUs to these compute-intensive tasks allows more complex models to be used, and used quickly enough to power systems that can respond to human input.
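
Dense matrix multiplication is the classic example: every element of the output can be computed independently, which is exactly the kind of work a GPU’s thousands of cores are built for. A small Python/NumPy sketch (NumPy itself runs on the CPU; GPU array libraries such as CuPy expose the same interface):

    import numpy as np

    # Each of the 1024x1024 output elements is an independent dot product,
    # so the whole multiply parallelizes naturally.
    a = np.random.rand(1024, 1024).astype(np.float32)
    b = np.random.rand(1024, 1024).astype(np.float32)
    c = a @ b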

Rob High, IBM Fellow, VP and chief technology officer for Watson, will speak at our GPU Technology Conference

Understanding Language

The capabilities GPUs bring to Watson are key to understanding the vast amounts of data people create every day — a problem that High and his team at IBM set out to solve with Watson.

With structured data representing only 20 percent of the world’s total, traditional computers struggle to process the remaining 80 percent of unstructured data. This means that many organizations are hampered from gathering data from unstructured text, video and audio that can give them a competitive advantage.

Cognitive systems, like Watson, set out to change that by focusing on understanding language as the starting point for human cognition. IBM’s engineers designed Watson to deal with the probabilistic nature of human systems.

Dive in at Our GPU Technology Conference

Our annual GPU Technology Conference is one of the best places to learn more about Watson and other leading-edge technologies, such as self-driving cars, artificial intelligence, deep learning and virtual reality.


