Now Hiring Student Programmers & Graphic Artists/3D Modelers

The Center for Simulations & Virtual Environments Research (UCSIM) is now hiring for student worker positions!

If you or someone you know might be interested in working at UCSIM, please pass along a link to this post. We are hiring both graphic artists/3D modelers and programmers to join our team creating state-of-the-art virtual learning environments, educational simulations, and serious games.

See the job descriptions below for more information and email us at ucsim@uc.edu if interested.

Programmer

Job Description:  The UCIT Center for Simulations & Virtual Environments Research seeks a talented programmer with previous game or simulation programming experience for an exciting student worker position developing virtual learning environments, educational simulations, and serious games.  This position offers a wide variety of opportunities to apply your creativity and skills, including designing and writing code for gameplay and simulation systems using the Unity3D game engine and OpenSimulator platforms; designing, testing, and refining interaction and UI elements; performing QA to debug, document, and deliver top-quality work; and working with a team of artists and developers to create the best virtual experience possible within time and budget constraints.  Preference will be given to applicants in computer science, IT, game design, or related fields of study, or who have programming experience with the Unity3D game engine or similar game engines.  The ideal applicant has a great sense of humor and attitude, a strong work ethic, enjoys working independently or in a team environment depending upon the project, and is passionate about gaming, virtual environments, and the emerging metaverse.  The position provides flexible scheduling around your classes and academic requirements.

Minimum Qualifications  

  • Must be enrolled at least part-time (6 credit hours for undergraduate students and 5 credit hours for graduate students), except during summer term; for summer work, students must be registered for the next academic semester.
  • Must be age 18 or older.
  • Must be available to work throughout the summer term.
  • Excellent verbal and written communication skills.
  • Programming experience in JavaScript, C#, .NET, C++ or related languages.
  • Web development experience, which should include some combination of experience with PHP, IIS, Apache, Python, MySQL, CSS, HTML, RSS, XHTML, or XML.
  • Applicants must have excellent problem-solving skills, good code hygiene, and the ability to coordinate well with other programmers on the team.
  • Must be highly comfortable using discussion boards, blogs, wikis, Twitter, and other social media technologies.

Ideal Qualifications

  • Experience with the Unity3D game engine preferred; other game engine experience or development in OpenSimulator a plus.
  • Advanced programming skills, or knowledge of mobile development platforms such as Android and iOS a plus.
  • Experience in the areas of tools development, physics, shaders, and audio programming a plus.
  • Knowledge or previous experience with 3D content pipelines (Maya/Max/Photoshop into game engine) a plus.

Pay Rate

$10+ depending upon experience

Graphic Artist/3D Modeler

Job Description:  This position offers a wide variety of opportunities to apply your creativity and skills, including designing, storyboarding, and inspiring the visual elements of our simulations and games; developing hard-surface and organic models, sets, and props; creating 2D textures and skins for 3D models; rigging and animating characters and NPCs; applying lightmapping and shading elements to scenes and sets; and working with a team of artists, programmers, and developers to create the best virtual experience possible within time and budget constraints.  Preference will be given to applicants in graphic communication design, electronic art, electronic design media, or related fields of study, or who have experience with the Unity3D game engine, Second Life, or OpenSimulator platforms.  The ideal applicant has a great sense of humor and attitude, a strong work ethic, enjoys working independently or in a team environment depending upon the project, and is passionate about gaming, virtual environments, and the emerging metaverse.  The position provides flexible scheduling around your classes and academic requirements.

Minimum Qualifications  

  • Must be enrolled at least part-time (6 credit hours for undergraduate students and 5 credit hours for graduate students), except during summer term; for summer work, students must be registered for the next academic semester.
  • Must be age 18 or older.
  • Must be available to work throughout the summer term.
  • Excellent verbal and written communication skills.
  • Experience with Maya, Blender, or 3Ds Max.
  • Graphic design experience using Photoshop or GIMP.
  • Applicants must be able to coordinate well with other artists and developers on the team.
  • Must be highly comfortable using discussion boards, blogs, wikis, Twitter, and other social media technologies.

Ideal Qualifications  

  • Experience with the Unity3D game engine preferred; other game engine experience or development in OpenSimulator a plus.
  • Knowledge or previous experience with 3D content pipelines (Maya/Max/Photoshop into game engine) a plus.
  • Experience with sculpting in Zbrush or Mudbox.
  • Ability to rig and animate models.
  • Advanced 2D graphic design and/or 3D modeling skills; drawing, sculpting, and other traditional art skills a plus.
  • Knowledgeable in other Adobe programs (Illustrator, Flash, etc.).
  • Some programming and HTML experience a plus.

Pay Rate

$10+ depending upon experience

For more information, contact us at ucsim@uc.edu!

Second Life Support Sunsetting


For the past several years, UCSIM has provided support for the Second Life service, including a virtual campus and virtual research areas.  However, as interest from faculty and students has waned, the decision was made to sunset support for this service in order to focus on newer, more scalable, and more cost-efficient platforms.

As announced in earlier communications to users, the main University of Cincinnati region in Second Life was archived and closed on April 2, 2014, and the POC-CENT region will continue to be available through August 2014.  Any remaining faculty or student projects in Second Life can continue to receive basic technical support from UCSIM through the end of the 2013-14 academic year.  At that time, we will formally end support for the Second Life platform.

Members of the university community who are interested in exploring virtual worlds or other VR platforms are encouraged to read more about the OpenSimulator service or our Unity platform services.  For questions or more information, please contact us at (513) 556-8562 or email ucsim@uc.edu.

UCSIM Team Attends the Cincinnati/Dayton VR Meetup

Oculus Rift demo

On Friday, February 28, 2014, members of the UCSIM team attended the Cincinnati/Dayton Virtual Reality Meetup. The meetup featured demos of the Oculus Rift, Durovis Dive, Razer Hydra, zSpace, CAVE, Leap Motion, Nvis HMDs, 3D projection systems, and a variety of virtual environments, VR software and platforms, and other related technology.

Hardware demos

“I was surprised at how well attended the meetup was; there were visitors from all over the region, including a lady who travelled all the way from Ann Arbor, MI,” said Chris Collins.  “There were students and academics from area schools and universities, of course, but also people from industry like GE and P&G, as well as plenty of hobbyists and gamers.”

Checking out VR systems hardware and the CAVE

In collaboration with colleagues from the University of Cincinnati Center for Cognition, Action, & Perception and Cincinnati Children’s Hospital Medical Center TEAM VR Lab, Chris demoed the Oculus Rift and showcased a few recent projects being developed at UCSIM, including environmental models that will be used for pediatric sports therapy and basic research.  “Many people hadn’t yet experienced the Rift, so it was fun to see their reaction for the first time,” she said.

Smale Interactive Visualization Center, Miami University

The event was hosted at Miami University in Oxford, OH by event organizers Eric Hodgson, Director of the Smale Interactive Visualization Center, and Reverend Kyle, host of the VR Podcast and contributor to the Road to VR website.

How to Use Microsoft Kinect with Unity


Last year, Microsoft and Unity formed a strategic partnership, and there has been significant collaboration and support between the two companies since. One of the tools that emerged from this partnership is a Kinect plugin for Unity that allows game developers to add motion controls to their games or simulations with ease. This is a brief overview of the features included in the plugin, as well as a quick step-by-step guide for those who are unfamiliar with the initial configuration but wish to add motion controls using the Microsoft Kinect. All of the information I’m about to provide is a summary of this Unity wiki page.

The first step is to download the following files:

  • Kinect for Windows SDK
  • Kinect for Windows Developer Toolkit
  • Kinect Wrapper Package for Unity

Note: These scripts take advantage of a plugin feature only available for Unity Pro users.

Initial Setup

Once these files have finished downloading, ensure that the Kinect device is not plugged into your machine, then begin the installation of the Windows SDK and the Windows Developer Toolkit, following the on-screen prompts.

After the installation, we are ready to create a new Unity project. Name it KinectDemo or something similar so you can easily distinguish it from older projects. Once Unity loads the default scene, you can import the assets from the package you downloaded earlier.  With the project open, extract the archive and double-click the Unity package to start the import process.

Once the package has finished importing, you can safely plug in your Microsoft Kinect. Locate the provided KinectSample scene and play it to test whether the Kinect drivers were installed correctly. If the installation was successful, you should see an RGB display screen, a grayscale depth screen, a rainbow-colored humanoid model, and various spheres all moving in sync with your movements in real time.

If you are still having problems, uninstall the Kinect for Windows SDK and Kinect for Windows Developer Toolkit, unplug your Kinect, and repeat the installation process described above. If everything works, you are now ready to begin developing with the Microsoft Kinect and Unity.

Further Details

Taking a closer look at the project hierarchy within the KinectSample scene, you might notice two game objects labeled rainbowMan_v6 and KinectPointMan. Within those prefabs are two scripts responsible for capturing and mimicking your movements: KinectPointController and KinectModelControllerV2.

Both scripts require a skeleton wrapper reference to work. The KinectPrefab is the game object that must be referenced in the Sw placeholder in the inspector. The KinectPrefab records motion data in real time and sends it to the controller scripts, which convert the data and transform the bones or game objects associated with specific parts of the body, mimicking their relative placement in the 3D environment. The sketch below illustrates the general pattern.
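To make the data flow concrete, here is a minimal, self-contained sketch of that pattern in UnityScript: each frame, a joint position is read from a data source and copied onto a scene object. In the real package the data source is the SkeletonWrapper on the KinectPrefab; here a sine wave stands in for live Kinect data, so the names and values below are purely illustrative.

#pragma strict

// The object that should follow a tracked joint,
// e.g. one of the spheres in the KinectSample scene.
var handMarker : Transform;

function Update()
{
    // Pretend this position came from the skeleton wrapper's
    // tracked right hand; here a sine wave fakes the motion.
    var jointPosition : Vector3 = Vector3(Mathf.Sin(Time.time), 1.0, 0.0);
    handMarker.position = jointPosition;
}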

The main difference between the KinectPointController and the KinectModelControllerV2 is how they reference each part of the body. The KinectPointController uses individual game objects; in the KinectSample scene, spheres represent the contact points. In practice, you could replace these objects with a simple hand model or a pointer object that the user could control and interact with.

The KinectModelControllerV2 instead uses a single rigged character model. A rigged model contains a digital skeleton bound to a 3D mesh. Expand your custom model fully so that each bone is visible in the hierarchy; each digital bone that needs to be controlled must be referenced in the appropriate slot in the inspector. (See video below.) After adding and configuring the KinectModelControllerV2, if your custom model behaves inaccurately (odd mesh behavior, joints in unrealistic locations), double-check the placement of the bones within the rigged model as well as the weight distribution. For more information about creating custom models, explore tutorial and training videos for Blender or Maya.

Additional Features

The mask filters and limits which joints are animated through the Kinect; any bones or game objects excluded from the mask remain static. For example, if your game only uses the hands and head, you can restrict tracking by setting the mask to “Upper_Body”.

The player field specifies which user controls the model. Adding local multiplayer support can be as simple as duplicating your player model and changing the player value in the inspector. The KinectPointMan also has a scale field, which adjusts the size of all referenced game objects used to translate player movements.

On the KinectModelControllerV2, if you intend to animate your model while the player is controlling it, set the “animated” flag and choose how much blending between the animation and the Kinect input should occur, on a range from 0 to 1. The sketch below illustrates the blending idea.
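The blend value behaves like a linear interpolation weight. The following self-contained toy (our own code, not the plugin’s) shows the idea: the final pose is a weighted mix of where the animation wants a bone and where the Kinect says it is.

#pragma strict

var animatedPosition : Vector3;   // where the animation places the bone
var kinectPosition : Vector3;     // where the Kinect places the bone
var blend : float = 0.5;          // 0 = animation only, 1 = Kinect only

// Returns the blended bone position, exactly like a 0-1 blend weight.
function BlendedPosition() : Vector3
{
    return Vector3.Lerp(animatedPosition, kinectPosition, blend);
}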

Quick Review

  1. Ensure that the Microsoft Kinect is NOT plugged into the machine before installation.
  2. Download the required files.
  3. Run the installer for the SDK and Developer Toolkit and follow the on-screen prompts.
  4. Create a new project within Unity and make sure to change it from the default project name.
  5. Extract the Kinect Wrapper Package for Unity and double click on the file to import all assets into the newly created project.
  6. Plug the Microsoft Kinect into the computer’s USB port.
  7. Inside the newly imported package is a KinectSample scene; open it and press Play to verify that motion controls are working.
  8. Develop the next big motion control simulation or game!

– Matt NeCamp

Recreating 3D Model Architecture: Knowing Your Environment

by Ian Anderson

Recreating a three-dimensional model of architecture poses many challenges when trying to remain true to the scale and placement of the building. Height and width, angle, visual depth, and color accuracy can all present difficulties when recreating a structure. Keeping a few things in mind while modeling can resolve these challenges and save the designer time and frustration.

A photograph is always a helpful reference for the position and placement of a building, but it can suggest inaccurate scale and angles because of its two-dimensional nature. Your eyes have binocular vision, while a camera is monocular, using only one lens; without multiple shots from different angles, a camera cannot give you the dimensional information you need.

In this example, the photo on the left makes the three triangle-shaped windows look flush with the wall behind them. In actuality, the windows angle away from the wall, as can be seen in the photo on the right. Even there, it is hard to get an accurate idea of where the angle ends.

Image of building showing angular windows

Physically visiting the building can help immensely when trying to understand the angles and sizes of certain geometry. The designer can view most areas of the architecture without being restricted to the single viewpoint that comes with relying on a photograph alone. Obviously, not every architectural structure is easy to reach or possible to visit, but seeing a building in person allows for a much greater understanding of it.

Another benefit of seeing the building in person is being able to judge the building’s colors accurately. The sun and other light sources can strongly influence the perception of a building’s color; depending on the time of day and the orientation of the building, some walls will appear oversaturated while others do not. Walking around the building to find the true color can be very helpful when creating textures for import.

Image showing two brick textures

For example, the bricks shown here are the same color, but the ones on the right are in direct sunlight while the others are in shade. The color can always be adjusted in Photoshop, but if accuracy is a concern, it may be easier to capture an accurate photo of the color first.

These are just a few tips that can be helpful when recreating a 3D model of existing architecture.  When in doubt, firsthand knowledge gained by visiting the building is always best; if that isn’t possible, having photographs from as many angles and under as many different lighting conditions as possible can help inform your modeling choices.

Tips for Creating a Seamless Texture

by Alex Hand

When recreating actual environments in a 3D program, it is helpful to create a set of textures that can be reused for several buildings. Using a single texture for multiple buildings saves time photographing and allows the artist to texture surfaces that may not be possible to photograph accurately in person. A brick texture, for example, can be used for many of the building exteriors within the virtual UC campus. Simply taking a photo of a brick wall and applying it to every face of the digital model will most likely result in a broken or distorted texture. To get the bricks to align properly, a seamless texture must be created in Photoshop.

A seamless texture is an image that can be tessellated without awkward breaks or seams. When working with a patterned texture such as brick, it is important to use a photo taken parallel to the face of the wall so that there is little to no distortion in the scaling of the bricks. It is also better to start with a high-resolution image: you can always scale the resolution down, but you cannot restore it once it has been reduced.

Layer via Copy in Photoshop

The first step in creating a seamless texture is to open the image in Photoshop and select the desired region to use as the texture. For the texture to tessellate, the selected region must be square; use the crop tool to cut the image down to a perfect square if it is not one already. Typing the same value, 1024 px for example, into the width and height of the crop tool will constrain the crop box to a square. Once the image is cropped, use the selection tool to select the left side of the image, about a third of the way in. Right-click the selection and choose Layer Via Copy. Make sure the background layer is editable; otherwise Photoshop cannot complete this operation.

With the new layer selected, go to Edit > Transform > Flip Horizontal, then move this layer to the far right side of the image. The left and right sides of the image now match, but there will probably be an awkward seam on the right-hand side. Use the eraser tool and clone stamp tool, with feathered edges, until the transition is less obvious. Once the new layer is complete, merge the two layers by pressing Shift+Ctrl+E.

Finished seamless texture

Next, select the top portion of the image, about one third of the way down. Copy the selection into a new layer, flip it vertically, place this piece at the bottom of the image, and erase until the desired look is reached. Merge the visible layers into a single image. The texture can now be saved as a pattern by selecting the Define Pattern option under the Edit menu, and the seamless texture can be painted onto the UV map of the three-dimensional object using the Pattern Stamp tool.

Beast Lightmapping

by Alex Davis

UCSIM staff have taken some time to really understand just how powerful Beast lightmapping is in Unity. Before finding Beast, most of the lightmapping in our work was done with Turtle, which comes with the Maya Entertainment Creation Suite; the lightmaps were then exported into Unity. The problem was that in order to have everything lightmapped, everything had to be done in Maya, which meant lightmaps couldn’t be created for any terrain assets in Unity.

Render using Maya

Later, a member of our team discovered Beast through this video. At first it looked like it had everything needed to make nicely lit environments. However, we had trouble replicating some of the things demonstrated in the video. We made a Cornell box of our own and tried to light it using a light-emitting shader, but nothing appeared lit after the bake. Eventually we gave up and just used a point light. We baked again and got some really nice shadows, but couldn’t see any final gathering at all. No matter how much we adjusted the bake settings, we never got anything beyond shadows.

Then a realization dawned: the free version of Unity may not support final gathering and light-emitting shaders. When we opened the lightmapping window in Unity Pro, there it was.


With final gather enabled, we were able to successfully replicate the results from the video. We didn’t add the skylight at the time; however, we tried the light-emitting shader in another scene and it worked as well.


So for anyone using Beast with the free version of Unity, don’t worry: you are not doing anything wrong. You can still use Beast to create some nice shadows; final gather and light-emitting shaders, however, can only be used in Unity Pro.

With good lightmapping, a player pays much less attention to the scenery itself, which is exactly what we aim for. During a simulation, having many objects around the world can create too much “background noise” that lessens the player’s immersion. With proper lighting, objects blend into each other and look like they were meant to be there instead of sticking out like a sore thumb. When lightmapping is done well, you don’t notice it, and that is what makes Beast such a powerful tool.

NPCs in Unity3D

by Krishna Gadiraju

Over the past two weeks, we have been working on designing non-player characters (NPCs) in Unity3D, similar to the ones we have in Second Life and OpenSim. As my colleague Matt explained in a previous article here, NPCs can be used for various purposes, ranging from holding a conversation with the user to helping test the environment.

We found some very useful tutorials that helped us in creating NPCs for the university campus demo site we are creating.

Audio, Text and Teleport:

To start off, we found this tutorial, which covers one of the most basic jobs for an NPC: identifying when the user comes close and then playing an audio file to deliver dialogue to the user.

However, playing a single audio file isn’t enough; we also need the NPC to hold a proper conversation with the user. A tutorial by Patrick Boelens gives a partial solution to this problem. In it, Patrick defines an NPC that displays text on the screen using a GUIText object to hold a conversation with the user.

Unity Script: Creating a NPC talk system from CG Cookie on Vimeo.

We built on Patrick’s good work by adding an audio component to go with the text being displayed on the screen.  We also added a teleport script to the NPC that teleports the user to a different location; a simple version of that idea is sketched below.
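A teleport behavior like ours can be as simple as the following sketch. This is our own minimal version, assuming the NPC has a trigger collider and the player object is tagged “Player”; the destination is an empty game object marking the target spot.

#pragma strict

// Empty GameObject marking where the user should be teleported.
var destination : Transform;

// Requires a collider on this NPC with "Is Trigger" enabled.
function OnTriggerEnter(other : Collider)
{
    if (other.CompareTag("Player"))
    {
        other.transform.position = destination.position;
    }
}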

For example, consider the PlaySound function shown below. It is designed to play an audio clip and display the corresponding dialogue text at the same time.


// Plays a dialogue clip and shows the matching text at the same time.
// Requires an AudioSource component on the same game object.
function PlaySound(sound : AudioClip, text : String, talkGUIText : GUIText)
{
    audio.clip = sound;
    Debug.Log(audio.clip.name);
    audio.Play();
    talkGUIText.text = text;
}

The rest of the controls that Patrick mentions can then be placed in an Update function, along the lines of the sketch below.
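As an illustration, the proximity check and key press that trigger the dialogue might look something like this. The range, key, player reference, and greeting text are our own choices, and PlaySound is the function shown above, assumed to live in the same script.

#pragma strict

var npcClip : AudioClip;       // dialogue audio for this NPC
var talkGUIText : GUIText;     // on-screen text object
var player : Transform;        // the player's transform
var talkRange : float = 3.0;   // how close the player must be

function Update()
{
    // When the player is in range and presses E, deliver the line.
    if (Vector3.Distance(transform.position, player.position) < talkRange
        && Input.GetKeyDown(KeyCode.E))
    {
        PlaySound(npcClip, "Welcome to the virtual campus!", talkGUIText);
    }
}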

We are now working on fine-tuning the code for the NPC to take the user on a tour of the University of Cincinnati virtual campus.   We look forward to getting user feedback and hope to release the first demo soon!


OpenSim: New Force Attach Functions

by Matt NeCamp

One of the benefits of using open source software like OpenSimulator (OpenSim) is the ongoing development of new functions in the simulator code. Members of the community development team have provided several updates that add fantastic tools for scripters and programmers to make their worlds more unique.

One of the recent updates to the core system added functions that force prim objects to attach themselves to specific avatars logged into the simulator. We can now simulate common object interactions more effectively by “force attaching” objects to an avatar without any pesky pop-up notifications. This encourages a more immersive experience, opening up incredible learning possibilities.

These powerful OpenSim Scripting Language (OSSL) functions have proven very helpful for a UCSIM project we’ve been developing in collaboration with the Oak Hills School District, located on the west side of Cincinnati. District administrators are exploring ways to use virtual environments for professional development and for training and evaluating new teacher candidates by simulating a classroom of students (adults acting as students).

In conjunction with this project, UCSIM has developed a Classroom (Mis)Behaviors Control Panel that allows the administrators to role-play different kinds of student behaviors in the virtual environment. Users click specific gestures on the Control Panel, which trigger short sound effects along with playful animations for their avatars.

(Mis)Behaviors Control Panel

Screenshot of the Classroom (Mis)Behaviors Control Panel shown fully expanded.


Some of the behaviors, such as Wad Paper or Play Music On Phone, interact with additional props so that the user’s avatar appears to be holding a cell phone or paper wad. Without the new attach functions, we couldn’t provide the illusion of acting out these behaviors. These props aid the role-playing experience during the administrator’s performance.

In the Classroom (Mis)Behaviors Control Panel we designed, all of the props are attached to the avatar using the function osForceAttachToAvatarFromInventory().

// Attaches a paper wad to the owner's right hand.
osForceAttachToOtherAvatarFromInventory(llGetOwner(), "paperwad", ATTACH_RHAND);

Using this function, we can place a 3D model of the paper wad inside the Control Panel object’s inventory, and then use a script to attach it to the user.

rolling up a paper wad

Student avatar appears to wad up a piece of paper.


With the aid of the attach function, the Classroom (Mis)Behaviors Control Panel is much more successful at immersing users in the environment. However, we discovered that the current implementation of the function that makes this charade possible causes a mischievous problem.

Every time osForceAttachToAvatarFromInventory() is called, a duplicate of the item is created: the original remains in the control panel’s inventory, while the duplicate is attached to the avatar. When the animated behavior completes and the item detaches, the duplicate gets tossed into the avatar’s inventory. This causes a long-term problem: hundreds of used paper wads and cellphones will clutter the user’s inventory over time, which also adds unnecessary bloat to the database where information about the user’s inventory is stored.

In other implementations, this behavior might not be an issue. For example, if we used the force attach function to put a mining hat on a user participating in a simulation of a mining disaster, the function would only be called once; the user would automatically wear the hat and then keep a copy of it. However, in a use case where the force attach function is called over and over, such as our Classroom (Mis)Behaviors Control Panel, the accumulation of duplicate objects in the user’s inventory over time is problematic.

Currently OpenSim doesn’t have a function that fixes this issue, but the beauty of open source projects like OpenSim is that the software is fueled by volunteer developers who provide patches and fixes to resolve these kinds of problems. Having identified this problem with the existing function, UCSIM hopes to contribute a solution by developing a new OSSL function that works like the existing one but uses a “temporary” object instead. When the temporary object is no longer in use, it is deleted from the simulation and the database.

We are working on an implementation of this new osForceAttachToAvatarFromInventoryTemp() function in collaboration with another OpenSim developer now, and we hope to contribute our efforts back to the OpenSim code base to make this useful new function available to other OpenSim programmers in the future.