Trace Bitmap in Adobe Flash

by Darrell Wilson

Flash is widely known as a hybrid animation and interactive website development tool. It allows you to create vector images, in which the lines and strokes you draw on screen are described algorithmically. Because they are manipulated through mathematical algorithms, vector images do not lose quality when you resize them, unlike the raster images that Photoshop manipulates.

Raster images pixelate and lose quality if you upscale them beyond their original size, but they can produce more realism in a picture than a vector image. Photoshop can convert vector images into raster images, but not vice versa. This post will show you how to convert a raster image into a vector image using the Trace Bitmap tool in Flash.

First, decide what raster image you want to convert. A simple picture from your camera or from the internet will be great.


Next, open Flash and set your canvas size in the Properties panel to match the size of your picture, then import the picture by dragging it onto the stage or by using File > Import > Import to Stage. Check that your picture is a bitmap as well.


Third, convert the picture into a symbol: right-click it and choose Convert to Symbol at the bottom of the menu. You should see a blue highlight border around your picture after the conversion.

Finally, select your picture again and, from the menu at the top, choose Modify > Bitmap > Trace Bitmap.


The result is a rather nice vector image that you can resize any time you want!

Opensim Server Downtime 10/24/12 – Completed

Update:  The server maintenance has been completed and all services restored to normal operation.

On Wednesday, October 24, 2012, the Opensim server will be down for maintenance from approximately 3PM to 6PM EST. We apologize in advance for any inconvenience.

3D Printing at DAAP’s Rapid Prototyping Center

While UCSIM focuses on developing virtual environments for education, from time to time we like to explore related technologies to discover potential benefits for future projects.  Recently, several UCSIM student staff have explored DAAP’s Rapid Prototyping Center (RPC) to learn more about converting 3D models into physical objects.

RPC offers 3D printers, CNC machining, laser cutting, and vacuum press machines, giving students many options for turning their projects into reality. RPC is capable of two types of 3D printing: powder and plastic. Plastic is much more durable but very expensive; powder is cheaper and usually less complicated to print. A few months ago, RPC also acquired new machines that can print powder in color. The colors print dull and desaturated, but they serve as a great base layer for painting, and printing in color costs the same as printing standard white.

We started the process by getting in touch with Aaron Rucker, a specialist at RPC. He walked us through the various technical issues to be aware of when modeling a project that you intend to print on a 3D printer. We use Maya for many of our projects at UCSIM, but our existing models may not be optimized for 3D printing. For example, to make modeling easier, Maya uses quads, so a face can have four edges. The picture of the cube below shows what quads look like.

However, the printing software doesn't support quads, and they might create problems when printing. The best way to avoid this is to run a cleanup operation. In Maya, switch to the Polygons module and go to Mesh > Cleanup, then scroll down to Fix by Tessellation and check "4-sided faces" and "Faces with more than 4 sides". This keeps the shape of the model but adds edges so that every face becomes the simplest shape there is, a triangle, which the printing software can read without any problems.
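If you have a lot of models to prepare, the same fix can be scripted. Here is a minimal sketch in Maya's Python interface (maya.cmds); it simply triangulates the selected meshes, which has the same effect as the tessellation cleanup above:

import maya.cmds as cmds

# Triangulate every selected mesh so each face ends up 3-sided
for mesh in cmds.ls(selection=True, dag=True, type='mesh'):
    cmds.polyTriangulate(mesh, constructionHistory=False)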

As a test object, we made a model of a scarecrow loosely inspired by the aesthetic of Backwater Gospel, then sent it to be printed overnight. This is when we found some problems with our model. The head and body were solid objects, and the thin legs couldn't support the weight, so one leg broke off. The fingers and left hand were so small that they drifted during printing, and the air blower simply blew them away. Since we didn't have time to change the model and schedule another print, we took what we had and glued it back together using epoxy and copper wires for support. The left image shows the computer model and the right shows the printed model.

While we still aren’t sure how we might incorporate 3D printing into future UCSIM projects, it was definitely worth learning new techniques and developing a better understanding of the whole process of 3D printing.

UC students and staff who are interested in using RPC for 3D printing should contact Aaron Rucker for more information. He knows all the ins and outs of printing models and can tell you what changes to make in your model before printing.  And many thanks to Aaron for all of the great information about RPC!

Exploring Non-Player Characters (NPCs) in Opensim

by Matt NeCamp

We recently updated several test regions on UCSIM's OpenSim virtual campus to the most recent version of the software, and this update provided several new features to play around with, including options for creating non-player characters, or NPCs. We scripted some simple NPC mechanics in OpenSim to see what they can do, and we successfully created several NPCs that the simulator itself treats as actual logged-in agents. With relatively simple programming, you can tell them where to go and sit, and even have them wave or perform similar gestures on command. We can also create and adjust an NPC's appearance so it fits into any scene.
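To give a flavor of that scripting, here is a minimal OSSL sketch (not our exact script) that creates an NPC, makes it speak, and walks it to a target. It assumes the OSSL NPC functions are enabled on the region, and the appearance notecard name is a placeholder:

key npc;

default
{
    state_entry()
    {
        // Create the NPC two meters away from a saved appearance notecard
        npc = osNpcCreate("Demo", "Avatar", llGetPos() + <2.0, 0.0, 0.0>, "npc appearance");
        osNpcSay(npc, "Welcome to the virtual campus!");
        // Walk (not fly) to a point ten meters away
        osNpcMoveToTarget(npc, llGetPos() + <10.0, 0.0, 0.0>, OS_NPC_NO_FLY);
    }

    touch_start(integer total_number)
    {
        osNpcRemove(npc);  // clean up the NPC when the prim is touched
    }
}

Related functions such as osNpcSit and osNpcPlayAnimation cover the sitting and gestures mentioned above.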

Although we've only begun to explore the options with NPCs, even these basic functions open up several new ways to make our virtual campus seem more realistic. We can create NPCs that walk around campus to fill up space and add a sense of community, or even create helpful NPCs that users can interact with to get useful information on demand. Feel free to check out the YouTube video below, in which 3DLES demonstrates how they use an NPC as a tour guide.


We can even use NPCs for technical support and to test our server. Although NPCs don't use as many resources as a real user, we can still use them to perform stress tests mimicking high-traffic situations. As you can see below, we created hundreds of NPCs to push our virtual world to the limit, collecting useful data that can help improve the overall service of our virtual campus.

An army of NPCs
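The spawning side of that stress test is conceptually just a loop around osNpcCreate. A hedged sketch (the count, spacing, and notecard name here are made up; a real test would ramp up gradually while watching the simulator statistics):

default
{
    state_entry()
    {
        integer i;
        for (i = 0; i < 100; ++i)
        {
            // Place the NPCs on a 10 x 10 grid around this prim
            vector offset = <(float)(i % 10) * 2.0, (float)(i / 10) * 2.0, 0.0>;
            osNpcCreate("Stress", "Tester" + (string)i, llGetPos() + offset, "npc appearance");
        }
    }
}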

Tips for Creating Models in Maya for Use in Unity3D

by Darrell Wilson

One of the projects we are working on is a virtual model of the UC campus, including a number of models of objects like benches and street lamps to help bring the environment to life. We're using Maya to model the objects, including the campus itself, and then importing the finished models into Unity.

In the course of this project I've made more than 20 objects so far, and I've learned a few tricks that are helpful for creating a model in Maya and importing it into Unity.

Use Reference Images

Making a car model was actually a bit more complicated than I predicted. As with any model, I first collected four reference images (two sides, back, and front; I decided not to use a top view to save time) and gathered them in a folder on my desktop to import later. After opening Maya, I created a cube, then deleted the top, bottom, and one of the sides, leaving three faces to place my reference pictures on.

To set up a reference image in Maya, choose Assign New Material > Lambert, then go to the material's attributes and, in the Attribute Editor, click the checkered button to the right of Color. Choose File and load each picture. The textures are now mapped to UVs, which you can edit in the UV editor! I normalized the UVs afterwards to bring the pictures back into focus, then selected the pictures and, in the Attribute Editor under Render Stats, unchecked Double Sided to make modeling a bit easier. (Note: I modeled one side first, in order to mirror it, so I wouldn't have to do twice the work.)
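The same material setup can also be scripted through maya.cmds. This is only a sketch; the node names, image path, and face index are placeholders:

import maya.cmds as cmds

# Build a Lambert shader with a file texture wired into its color
shader = cmds.shadingNode('lambert', asShader=True, name='refSideMat')
tex = cmds.shadingNode('file', asTexture=True, name='refSideImage')
cmds.setAttr(tex + '.fileTextureName', 'C:/refs/car_side.jpg', type='string')
cmds.connectAttr(tex + '.outColor', shader + '.color')

# Create a shading group and assign it to one face of the reference box
sg = cmds.sets(renderable=True, noSurfaceShader=True, empty=True, name='refSideSG')
cmds.connectAttr(shader + '.outColor', sg + '.surfaceShader')
cmds.sets('refBox.f[2]', edit=True, forceElement=sg)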

Combining Multiple UV Sets in Maya to Single UV Set For Use in Unity

Shortly after modeling my Acura car (roughly 80 prims), I realized that Unity can only import one UV set per mesh; however, my car model had four UV sets: the front, the two sides, and the back. I needed a way to combine the four UV sets into one in order to properly texture the model in Unity, and it turns out there's an easy way to do that. In Maya, click on the faces for a particular side (Shift + click to select more than one face), then choose Create UVs > Create UVs Based On Camera. I simply chose a camera position (other than perspective) before creating each UV in order to get the correct shot of that side of the car. The four UVs were then all placed on one UV set.

Create a “Photoshop Network” to Edit in Photoshop and Maya Simultaneously

One useful built-in feature of Maya is that you can color your model directly in Photoshop by creating a PSD network. In the Polygons menu set, go to Edit UVs > UV Texture Editor, then Image > Create PSD Network. Creating a PSD network should automatically bring up Photoshop. If not, an alternative is to take a UV snapshot (in the UV Texture Editor, Polygons > UV Snapshot) and then load the picture into Photoshop or any other coloring program. As for me, I took a slightly different approach and textured the car separately in Flash to get more of an edgy, cartoon feel, then exported the results as PNGs into Photoshop. Saved, uploaded, and voila! A car ready to use in Unity.

Using an Animation to Create Realistic Cloth, Sheets, Flags, Etc.

Another object on my to-do list was a model of an American flag. Making realistic 3D cloth was extremely easy in this tutorial, using nCloth and an nConstraint on a planar model.

However, I needed a static cloth, not a cloth animation. After playing through the animation to see which position I wanted the cloth in, I stopped on that frame, deleted all the other frames, and then duplicated the object to ensure it was no longer an animation. I then imported it into Unity to test it, and it came out just as I intended.
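A scripted version of that freeze step is possible too, again through maya.cmds (the mesh name and frame number are placeholders): jump to the pose you like, duplicate the simulated mesh, and delete the duplicate's history so it becomes plain geometry.

import maya.cmds as cmds

cmds.currentTime(42)  # the frame with the cloth pose you want
frozen = cmds.duplicate('flagCloth', name='flagStatic')[0]
cmds.delete(frozen, constructionHistory=True)  # bake it into static geometry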

Do you have good tips for bringing models from Maya into Unity? Share them in the comments!

Unity Programming Tutorials and Setting up Android for Unity

by Alex Davis

I have become more comfortable working with Unity. However, I only have practical knowledge of the design and artistic side of games and simulation development. The other side is, of course, programming all the assets you have created. I found a guy who recently started a live-streaming video series called Cooking With Unity. His show airs every Monday at 8:00pm PDT, and you can watch previous episodes at any time. His tutorials mainly focus on programming in C#, and they have been incredibly helpful for learning some basic programming skills.

Right now I am following along with his Tower Defence tutorial. This is what I have so far:

tower defence game in development

I have enemies (red cubes) spawning from the right and following a path outlined with moving arrows. You can click anywhere on the grass to spawn a tower (green cube) that will shoot at an enemy when it is within range. You can also select which tower to spawn by clicking the towers along the top. The best part about this tutorial is that it came with a script that detects mouse clicks as well as touches, which means you can publish the game for both computers and the mobile devices that Unity supports.
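I won't reproduce the tutorial's script here, but the idea behind unified input is easy to sketch in C#. This hypothetical component treats the first touch like a left mouse click and ray-casts into the scene to find what was tapped:

using UnityEngine;

public class TapInput : MonoBehaviour
{
    void Update()
    {
        Vector3 screenPos;

        if (Input.GetMouseButtonDown(0))
            screenPos = Input.mousePosition;  // desktop click
        else if (Input.touchCount > 0 && Input.GetTouch(0).phase == TouchPhase.Began)
            screenPos = Input.GetTouch(0).position;  // mobile tap
        else
            return;

        // Find the object under the click/tap, e.g. the grass where a tower should spawn
        Ray ray = Camera.main.ScreenPointToRay(screenPos);
        RaycastHit hit;
        if (Physics.Raycast(ray, out hit))
            Debug.Log("Tapped " + hit.collider.name);
    }
}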

Speaking of mobile devices, we have been experimenting with getting Unity set up for building Android games and simulations. I used this video to help me get set up.

Basically, you need to download Unity and purchase a license to build for Android. Then download the Android SDK and copy the "adb.exe", "AdbWinApi.dll", and "AdbWinUsbApi.dll" files from the platform-tools folder to the tools folder. After that, Unity should be ready to publish to Android and you can start building your game.

When you want to test your game or simulation on an Android device, go to File > Build Settings and make sure Android is selected as your platform. Check the Player Settings to make any refinements you want. When you are done, click Build and Unity will produce an .apk file. Plug your device into your computer, mount it, and copy the .apk file onto the device. Now you need to install the .apk; to do this, I use Astro. Open Astro, select your game or simulation, click Open App Manager, and install it. Your game or simulation should install and show up in your app drawer.
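If you find yourself rebuilding often, the Build Settings step can also be automated with a small editor script. A hypothetical sketch (it would live in an Editor folder); the scene path and output name are placeholders:

using UnityEditor;

public static class AndroidBuild
{
    [MenuItem("Build/Android APK")]
    public static void Build()
    {
        // Equivalent to picking Android in Build Settings and clicking Build
        string[] scenes = { "Assets/Scenes/Main.unity" };
        BuildPipeline.BuildPlayer(scenes, "Builds/game.apk", BuildTarget.Android, BuildOptions.None);
    }
}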

As of now I have only deployed a simple proof-of-concept game to my device, in which you drive a truck around a grid. But now that I know the process, I can finally start creating games and simulations for Android!

testing a unity game on android

University of Cincinnati Main Campus in Second Life – Back Online

Update:  The University of Cincinnati main region in Second Life is now back online.  See the Linden Lab status blog for more  updates.

. . .

It has come to our attention that Linden Lab, the company that maintains the Second Life virtual world, is having technical difficulties today and is in the middle of unplanned maintenance – see the announcement on their website.

Unfortunately, this unplanned maintenance has caused the University of Cincinnati main campus region to be offline, and we do not yet have an estimate of when it will be back up.  Campus users can still log into Second Life and have the option to teleport to one of our secondary regions by clicking one of the SLurls (Second Life URLs) below:

UC Sandbox Region (University of Cincinnati3):
http://slurl.com/secondlife/University%20of%20Cincinnati3/108/218/24

Point of Care Center for Emerging Neuro Technologies (POC-CENT) (University of Cincinnati5):
http://slurl.com/secondlife/University%20of%20Cincinnati5/169/107/25 

We will update this post when we have more information about when the main University of Cincinnati campus region will be back online.  We apologize for any inconvenience and thanks for your patience during this unplanned downtime.

New Starter Avatars for Opensim Campus

Visitors to the University of Cincinnati virtual Opensim campus now have a number of starter avatars to choose from when creating their user accounts.

Designed by DAAP Digital Design student Ayan Daniels (Ayan Deluxe), the starter avatars offer a variety of choices.  “When students, professors, and interested parties log into the virtual campus, they will have a variety of options for their avatars,” said Ayan. “These avatars range in ethnicity, age, gender, weight, and style, so anyone will be able to find an avatar that looks like them, makes them laugh, or opens their imagination.”


To create a new account, visit the Connect to UC Opensim page and follow the instructions to create an account and download the viewer software.

SL & OpenSim using Kinect

Kinect Research

There are several ways to use Kinect with OpenSim. In our case, we started the less messy way and used a third-party program to interpret gestures and play pre-made animations. Essentially this is the same idea as pressing a button on the keyboard, except that physical activity is what presses the 'keys'. We used software developed at USC called the Flexible Action and Articulated Skeleton Toolkit (FAAST). To use it, follow the installation instructions and install the correct drivers. Once your Kinect recognizes you, you can run FAAST. The key bindings we used are below; each line gives the gesture, a sensitivity threshold, the key event type, and the key to send:

walk 3 key_hold up_arrow
jump 5 key_hold e
crouch 20 key_hold c
right_arm_forward 15 key_hold w
left_arm_forward 15 key_hold s
right_arm_out 15 key_hold right_arrow
left_arm_out 15 key_hold left_arrow

This means that walking in place will move your avatar forward, jumping 5 inches off the ground will make you fly, and crouching will make your avatar drop back down to the ground. Extending your right or left arm forward presses 'w' or 's', and putting your arm out to the right or left will turn your avatar in that direction.

Another scenario is streaming data from the Kinect straight into OpenSim. The University of Tokyo's Network Systems Laboratory has been working on software that can stream motion-capture data into OpenSim. Their software is currently called SLKinect2, but it will soon be renamed Rinions. It relies on the open-source OpenNI and NITE drivers. To use it, upload an animation .bvh file and check the looped option during upload; this allows the avatar to receive the streaming data from the SLKinect2 software. The Network Systems Laboratory has great documentation and easy-to-follow instructions on their website. The SLKinect2 manual can be found here.